2025-05-14 01:40:26.549718 | Job console starting 2025-05-14 01:40:26.561072 | Updating git repos 2025-05-14 01:40:26.646205 | Cloning repos into workspace 2025-05-14 01:40:26.795460 | Restoring repo states 2025-05-14 01:40:26.838949 | Merging changes 2025-05-14 01:40:26.838980 | Checking out repos 2025-05-14 01:40:27.132314 | Preparing playbooks 2025-05-14 01:40:27.793340 | Running Ansible setup 2025-05-14 01:40:33.201342 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main] 2025-05-14 01:40:33.951818 | 2025-05-14 01:40:33.951996 | PLAY [Base pre] 2025-05-14 01:40:33.969500 | 2025-05-14 01:40:33.969646 | TASK [Setup log path fact] 2025-05-14 01:40:34.000452 | orchestrator | ok 2025-05-14 01:40:34.019301 | 2025-05-14 01:40:34.019518 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-05-14 01:40:34.061879 | orchestrator | ok 2025-05-14 01:40:34.078546 | 2025-05-14 01:40:34.078734 | TASK [emit-job-header : Print job information] 2025-05-14 01:40:34.126892 | # Job Information 2025-05-14 01:40:34.127141 | Ansible Version: 2.16.14 2025-05-14 01:40:34.127208 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04 2025-05-14 01:40:34.127260 | Pipeline: periodic-midnight 2025-05-14 01:40:34.127294 | Executor: 521e9411259a 2025-05-14 01:40:34.127324 | Triggered by: https://github.com/osism/testbed 2025-05-14 01:40:34.127354 | Event ID: b4b94aaca5194ff09f9dc9c718ea9276 2025-05-14 01:40:34.136908 | 2025-05-14 01:40:34.137030 | LOOP [emit-job-header : Print node information] 2025-05-14 01:40:34.264038 | orchestrator | ok: 2025-05-14 01:40:34.264341 | orchestrator | # Node Information 2025-05-14 01:40:34.264417 | orchestrator | Inventory Hostname: orchestrator 2025-05-14 01:40:34.264461 | orchestrator | Hostname: zuul-static-regiocloud-infra-1 2025-05-14 01:40:34.264500 | orchestrator | Username: zuul-testbed06 2025-05-14 01:40:34.264535 | orchestrator | Distro: Debian 12.10 2025-05-14 01:40:34.264579 | orchestrator | Provider: static-testbed 2025-05-14 01:40:34.264616 | orchestrator | Region: 2025-05-14 01:40:34.264652 | orchestrator | Label: testbed-orchestrator 2025-05-14 01:40:34.264686 | orchestrator | Product Name: OpenStack Nova 2025-05-14 01:40:34.264719 | orchestrator | Interface IP: 81.163.193.140 2025-05-14 01:40:34.292440 | 2025-05-14 01:40:34.292597 | TASK [log-inventory : Ensure Zuul Ansible directory exists] 2025-05-14 01:40:34.786622 | orchestrator -> localhost | changed 2025-05-14 01:40:34.804546 | 2025-05-14 01:40:34.804721 | TASK [log-inventory : Copy ansible inventory to logs dir] 2025-05-14 01:40:35.904880 | orchestrator -> localhost | changed 2025-05-14 01:40:35.933623 | 2025-05-14 01:40:35.933804 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build] 2025-05-14 01:40:36.254609 | orchestrator -> localhost | ok 2025-05-14 01:40:36.270809 | 2025-05-14 01:40:36.271015 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID] 2025-05-14 01:40:36.310666 | orchestrator | ok 2025-05-14 01:40:36.331121 | orchestrator | included: /var/lib/zuul/builds/7d8a8fd7f89a456f89cad5df4058c4c4/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml 2025-05-14 01:40:36.339038 | 2025-05-14 01:40:36.339139 | TASK [add-build-sshkey : Create Temp SSH key] 2025-05-14 01:40:37.870335 | orchestrator -> localhost | Generating public/private rsa key pair. 
2025-05-14 01:40:37.870645 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/7d8a8fd7f89a456f89cad5df4058c4c4/work/7d8a8fd7f89a456f89cad5df4058c4c4_id_rsa 2025-05-14 01:40:37.870694 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/7d8a8fd7f89a456f89cad5df4058c4c4/work/7d8a8fd7f89a456f89cad5df4058c4c4_id_rsa.pub 2025-05-14 01:40:37.870721 | orchestrator -> localhost | The key fingerprint is: 2025-05-14 01:40:37.870746 | orchestrator -> localhost | SHA256:jut/GVV2gziyELenWmhhOBeKvE2fOAe7pFjdRBYmMdI zuul-build-sshkey 2025-05-14 01:40:37.870768 | orchestrator -> localhost | The key's randomart image is: 2025-05-14 01:40:37.870804 | orchestrator -> localhost | +---[RSA 3072]----+ 2025-05-14 01:40:37.870828 | orchestrator -> localhost | | ..+.Bo. . . | 2025-05-14 01:40:37.870871 | orchestrator -> localhost | | . oEO.o..o .o..| 2025-05-14 01:40:37.870892 | orchestrator -> localhost | | o * =..o..o ..| 2025-05-14 01:40:37.870913 | orchestrator -> localhost | | = @ +.o . | 2025-05-14 01:40:37.870934 | orchestrator -> localhost | | o B BSo . | 2025-05-14 01:40:37.870960 | orchestrator -> localhost | | o o =oo . | 2025-05-14 01:40:37.870981 | orchestrator -> localhost | | . . .... o | 2025-05-14 01:40:37.871001 | orchestrator -> localhost | | . o | 2025-05-14 01:40:37.871022 | orchestrator -> localhost | | .o... | 2025-05-14 01:40:37.871043 | orchestrator -> localhost | +----[SHA256]-----+ 2025-05-14 01:40:37.871098 | orchestrator -> localhost | ok: Runtime: 0:00:01.049051 2025-05-14 01:40:37.879279 | 2025-05-14 01:40:37.879455 | TASK [add-build-sshkey : Remote setup ssh keys (linux)] 2025-05-14 01:40:37.909560 | orchestrator | ok 2025-05-14 01:40:37.920559 | orchestrator | included: /var/lib/zuul/builds/7d8a8fd7f89a456f89cad5df4058c4c4/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml 2025-05-14 01:40:37.931008 | 2025-05-14 01:40:37.931110 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey] 2025-05-14 01:40:37.955213 | orchestrator | skipping: Conditional result was False 2025-05-14 01:40:37.971620 | 2025-05-14 01:40:37.971764 | TASK [add-build-sshkey : Enable access via build key on all nodes] 2025-05-14 01:40:38.581135 | orchestrator | changed 2025-05-14 01:40:38.590732 | 2025-05-14 01:40:38.590876 | TASK [add-build-sshkey : Make sure user has a .ssh] 2025-05-14 01:40:38.893803 | orchestrator | ok 2025-05-14 01:40:38.903890 | 2025-05-14 01:40:38.904036 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes] 2025-05-14 01:40:39.373783 | orchestrator | ok 2025-05-14 01:40:39.382448 | 2025-05-14 01:40:39.382580 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes] 2025-05-14 01:40:39.818053 | orchestrator | ok 2025-05-14 01:40:39.826979 | 2025-05-14 01:40:39.827097 | TASK [add-build-sshkey : Remote setup ssh keys (windows)] 2025-05-14 01:40:39.851074 | orchestrator | skipping: Conditional result was False 2025-05-14 01:40:39.862714 | 2025-05-14 01:40:39.863487 | TASK [remove-zuul-sshkey : Remove master key from local agent] 2025-05-14 01:40:40.345852 | orchestrator -> localhost | changed 2025-05-14 01:40:40.370945 | 2025-05-14 01:40:40.371096 | TASK [add-build-sshkey : Add back temp key] 2025-05-14 01:40:40.731681 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/7d8a8fd7f89a456f89cad5df4058c4c4/work/7d8a8fd7f89a456f89cad5df4058c4c4_id_rsa (zuul-build-sshkey) 2025-05-14 01:40:40.732194 | 
orchestrator -> localhost | ok: Runtime: 0:00:00.018235 2025-05-14 01:40:40.745301 | 2025-05-14 01:40:40.745471 | TASK [add-build-sshkey : Verify we can still SSH to all nodes] 2025-05-14 01:40:41.200794 | orchestrator | ok 2025-05-14 01:40:41.209029 | 2025-05-14 01:40:41.209159 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)] 2025-05-14 01:40:41.243997 | orchestrator | skipping: Conditional result was False 2025-05-14 01:40:41.306517 | 2025-05-14 01:40:41.306653 | TASK [start-zuul-console : Start zuul_console daemon.] 2025-05-14 01:40:41.740068 | orchestrator | ok 2025-05-14 01:40:41.756458 | 2025-05-14 01:40:41.756591 | TASK [validate-host : Define zuul_info_dir fact] 2025-05-14 01:40:41.804180 | orchestrator | ok 2025-05-14 01:40:41.814825 | 2025-05-14 01:40:41.814980 | TASK [validate-host : Ensure Zuul Ansible directory exists] 2025-05-14 01:40:42.144650 | orchestrator -> localhost | ok 2025-05-14 01:40:42.159873 | 2025-05-14 01:40:42.160042 | TASK [validate-host : Collect information about the host] 2025-05-14 01:40:43.420790 | orchestrator | ok 2025-05-14 01:40:43.438116 | 2025-05-14 01:40:43.438254 | TASK [validate-host : Sanitize hostname] 2025-05-14 01:40:43.503427 | orchestrator | ok 2025-05-14 01:40:43.511512 | 2025-05-14 01:40:43.511643 | TASK [validate-host : Write out all ansible variables/facts known for each host] 2025-05-14 01:40:44.083999 | orchestrator -> localhost | changed 2025-05-14 01:40:44.098634 | 2025-05-14 01:40:44.098830 | TASK [validate-host : Collect information about zuul worker] 2025-05-14 01:40:44.540520 | orchestrator | ok 2025-05-14 01:40:44.548363 | 2025-05-14 01:40:44.548531 | TASK [validate-host : Write out all zuul information for each host] 2025-05-14 01:40:45.111034 | orchestrator -> localhost | changed 2025-05-14 01:40:45.131136 | 2025-05-14 01:40:45.131266 | TASK [prepare-workspace-log : Start zuul_console daemon.] 2025-05-14 01:40:45.441539 | orchestrator | ok 2025-05-14 01:40:45.451223 | 2025-05-14 01:40:45.451359 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.] 2025-05-14 01:41:01.717726 | orchestrator | changed: 2025-05-14 01:41:01.718110 | orchestrator | .d..t...... src/ 2025-05-14 01:41:01.718167 | orchestrator | .d..t...... src/github.com/ 2025-05-14 01:41:01.718205 | orchestrator | .d..t...... src/github.com/osism/ 2025-05-14 01:41:01.718237 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/ 2025-05-14 01:41:01.718266 | orchestrator | RedHat.yml 2025-05-14 01:41:01.731621 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml 2025-05-14 01:41:01.731639 | orchestrator | RedHat.yml 2025-05-14 01:41:01.731694 | orchestrator | = 1.53.0"... 2025-05-14 01:41:13.606919 | orchestrator | 01:41:13.606 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"... 2025-05-14 01:41:13.679687 | orchestrator | 01:41:13.679 STDOUT terraform: - Finding latest version of hashicorp/null... 2025-05-14 01:41:14.913418 | orchestrator | 01:41:14.913 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0... 2025-05-14 01:41:16.141386 | orchestrator | 01:41:16.141 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2) 2025-05-14 01:41:17.145934 | orchestrator | 01:41:17.145 STDOUT terraform: - Installing hashicorp/local v2.5.2... 
2025-05-14 01:41:17.994476 | orchestrator | 01:41:17.994 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80) 2025-05-14 01:41:18.645895 | orchestrator | 01:41:18.645 STDOUT terraform: - Installing hashicorp/null v3.2.4... 2025-05-14 01:41:19.622741 | orchestrator | 01:41:19.622 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80) 2025-05-14 01:41:19.622816 | orchestrator | 01:41:19.622 STDOUT terraform: Providers are signed by their developers. 2025-05-14 01:41:19.622884 | orchestrator | 01:41:19.622 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here: 2025-05-14 01:41:19.622956 | orchestrator | 01:41:19.622 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/ 2025-05-14 01:41:19.623076 | orchestrator | 01:41:19.622 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider 2025-05-14 01:41:19.623216 | orchestrator | 01:41:19.623 STDOUT terraform: selections it made above. Include this file in your version control repository 2025-05-14 01:41:19.623349 | orchestrator | 01:41:19.623 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when 2025-05-14 01:41:19.623443 | orchestrator | 01:41:19.623 STDOUT terraform: you run "tofu init" in the future. 2025-05-14 01:41:19.623529 | orchestrator | 01:41:19.623 STDOUT terraform: OpenTofu has been successfully initialized! 2025-05-14 01:41:19.623648 | orchestrator | 01:41:19.623 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see 2025-05-14 01:41:19.623796 | orchestrator | 01:41:19.623 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands 2025-05-14 01:41:19.623831 | orchestrator | 01:41:19.623 STDOUT terraform: should now work. 2025-05-14 01:41:19.623969 | orchestrator | 01:41:19.623 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu, 2025-05-14 01:41:19.624130 | orchestrator | 01:41:19.623 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other 2025-05-14 01:41:19.624254 | orchestrator | 01:41:19.624 STDOUT terraform: commands will detect it and remind you to do so if necessary. 2025-05-14 01:41:20.744058 | orchestrator | 01:41:20.743 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-05-14 01:41:20.974202 | orchestrator | 01:41:20.973 STDOUT terraform: Created and switched to workspace "ci"! 2025-05-14 01:41:20.974296 | orchestrator | 01:41:20.973 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state, 2025-05-14 01:41:20.974532 | orchestrator | 01:41:20.973 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state 2025-05-14 01:41:20.974551 | orchestrator | 01:41:20.974 STDOUT terraform: for this configuration. 2025-05-14 01:41:21.235809 | orchestrator | 01:41:21.235 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 
2025-05-14 01:41:21.359648 | orchestrator | 01:41:21.359 STDOUT terraform: ci.auto.tfvars 2025-05-14 01:41:21.393503 | orchestrator | 01:41:21.393 STDOUT terraform: default_custom.tf 2025-05-14 01:41:21.592927 | orchestrator | 01:41:21.592 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-05-14 01:41:22.604501 | orchestrator | 01:41:22.604 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-05-14 01:41:23.124089 | orchestrator | 01:41:23.123 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-05-14 01:41:23.325248 | orchestrator | 01:41:23.324 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-05-14 01:41:23.325359 | orchestrator | 01:41:23.325 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-05-14 01:41:23.325394 | orchestrator | 01:41:23.325 STDOUT terraform:  + create 2025-05-14 01:41:23.325404 | orchestrator | 01:41:23.325 STDOUT terraform:  <= read (data resources) 2025-05-14 01:41:23.325459 | orchestrator | 01:41:23.325 STDOUT terraform: OpenTofu will perform the following actions: 2025-05-14 01:41:23.325765 | orchestrator | 01:41:23.325 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-05-14 01:41:23.325798 | orchestrator | 01:41:23.325 STDOUT terraform:  # (config refers to values not yet known) 2025-05-14 01:41:23.325856 | orchestrator | 01:41:23.325 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-05-14 01:41:23.325931 | orchestrator | 01:41:23.325 STDOUT terraform:  + checksum = (known after apply) 2025-05-14 01:41:23.325961 | orchestrator | 01:41:23.325 STDOUT terraform:  + created_at = (known after apply) 2025-05-14 01:41:23.326029 | orchestrator | 01:41:23.325 STDOUT terraform:  + file = (known after apply) 2025-05-14 01:41:23.326106 | orchestrator | 01:41:23.326 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.326155 | orchestrator | 01:41:23.326 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.326207 | orchestrator | 01:41:23.326 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-14 01:41:23.326267 | orchestrator | 01:41:23.326 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-14 01:41:23.326298 | orchestrator | 01:41:23.326 STDOUT terraform:  + most_recent = true 2025-05-14 01:41:23.326380 | orchestrator | 01:41:23.326 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:41:23.326453 | orchestrator | 01:41:23.326 STDOUT terraform:  + protected = (known after apply) 2025-05-14 01:41:23.326507 | orchestrator | 01:41:23.326 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.326564 | orchestrator | 01:41:23.326 STDOUT terraform:  + schema = (known after apply) 2025-05-14 01:41:23.326610 | orchestrator | 01:41:23.326 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-14 01:41:23.326673 | orchestrator | 01:41:23.326 STDOUT terraform:  + tags = (known after apply) 2025-05-14 01:41:23.326710 | orchestrator | 01:41:23.326 STDOUT terraform:  + updated_at = (known after apply) 2025-05-14 01:41:23.326745 | orchestrator | 01:41:23.326 STDOUT terraform:  } 2025-05-14 01:41:23.326838 | orchestrator | 01:41:23.326 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 
2025-05-14 01:41:23.326878 | orchestrator | 01:41:23.326 STDOUT terraform:  # (config refers to values not yet known) 2025-05-14 01:41:23.326944 | orchestrator | 01:41:23.326 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-05-14 01:41:23.326989 | orchestrator | 01:41:23.326 STDOUT terraform:  + checksum = (known after apply) 2025-05-14 01:41:23.327034 | orchestrator | 01:41:23.326 STDOUT terraform:  + created_at = (known after apply) 2025-05-14 01:41:23.327092 | orchestrator | 01:41:23.327 STDOUT terraform:  + file = (known after apply) 2025-05-14 01:41:23.327130 | orchestrator | 01:41:23.327 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.327186 | orchestrator | 01:41:23.327 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.327223 | orchestrator | 01:41:23.327 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-05-14 01:41:23.327280 | orchestrator | 01:41:23.327 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-05-14 01:41:23.327345 | orchestrator | 01:41:23.327 STDOUT terraform:  + most_recent = true 2025-05-14 01:41:23.327420 | orchestrator | 01:41:23.327 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:41:23.327454 | orchestrator | 01:41:23.327 STDOUT terraform:  + protected = (known after apply) 2025-05-14 01:41:23.327504 | orchestrator | 01:41:23.327 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.327551 | orchestrator | 01:41:23.327 STDOUT terraform:  + schema = (known after apply) 2025-05-14 01:41:23.327605 | orchestrator | 01:41:23.327 STDOUT terraform:  + size_bytes = (known after apply) 2025-05-14 01:41:23.327652 | orchestrator | 01:41:23.327 STDOUT terraform:  + tags = (known after apply) 2025-05-14 01:41:23.327691 | orchestrator | 01:41:23.327 STDOUT terraform:  + updated_at = (known after apply) 2025-05-14 01:41:23.327712 | orchestrator | 01:41:23.327 STDOUT terraform:  } 2025-05-14 01:41:23.327775 | orchestrator | 01:41:23.327 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-05-14 01:41:23.327830 | orchestrator | 01:41:23.327 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-05-14 01:41:23.327881 | orchestrator | 01:41:23.327 STDOUT terraform:  + content = (known after apply) 2025-05-14 01:41:23.327939 | orchestrator | 01:41:23.327 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-14 01:41:23.328027 | orchestrator | 01:41:23.327 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-14 01:41:23.328119 | orchestrator | 01:41:23.328 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-14 01:41:23.328222 | orchestrator | 01:41:23.328 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-14 01:41:23.328274 | orchestrator | 01:41:23.328 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-14 01:41:23.328333 | orchestrator | 01:41:23.328 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-14 01:41:23.328420 | orchestrator | 01:41:23.328 STDOUT terraform:  + directory_permission = "0777" 2025-05-14 01:41:23.328480 | orchestrator | 01:41:23.328 STDOUT terraform:  + file_permission = "0644" 2025-05-14 01:41:23.328533 | orchestrator | 01:41:23.328 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-05-14 01:41:23.328598 | orchestrator | 01:41:23.328 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.328621 | orchestrator | 01:41:23.328 STDOUT terraform:  } 2025-05-14 01:41:23.328683 | orchestrator | 01:41:23.328 STDOUT 
terraform:  # local_file.id_rsa_pub will be created 2025-05-14 01:41:23.328738 | orchestrator | 01:41:23.328 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-05-14 01:41:23.328786 | orchestrator | 01:41:23.328 STDOUT terraform:  + content = (known after apply) 2025-05-14 01:41:23.328844 | orchestrator | 01:41:23.328 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-14 01:41:23.328914 | orchestrator | 01:41:23.328 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-14 01:41:23.328955 | orchestrator | 01:41:23.328 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-14 01:41:23.329015 | orchestrator | 01:41:23.328 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-14 01:41:23.329058 | orchestrator | 01:41:23.329 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-14 01:41:23.329124 | orchestrator | 01:41:23.329 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-14 01:41:23.329135 | orchestrator | 01:41:23.329 STDOUT terraform:  + directory_permission = "0777" 2025-05-14 01:41:23.329175 | orchestrator | 01:41:23.329 STDOUT terraform:  + file_permission = "0644" 2025-05-14 01:41:23.329221 | orchestrator | 01:41:23.329 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-05-14 01:41:23.329281 | orchestrator | 01:41:23.329 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.329290 | orchestrator | 01:41:23.329 STDOUT terraform:  } 2025-05-14 01:41:23.329320 | orchestrator | 01:41:23.329 STDOUT terraform:  # local_file.inventory will be created 2025-05-14 01:41:23.329355 | orchestrator | 01:41:23.329 STDOUT terraform:  + resource "local_file" "inventory" { 2025-05-14 01:41:23.329418 | orchestrator | 01:41:23.329 STDOUT terraform:  + content = (known after apply) 2025-05-14 01:41:23.329479 | orchestrator | 01:41:23.329 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-14 01:41:23.329518 | orchestrator | 01:41:23.329 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-05-14 01:41:23.329579 | orchestrator | 01:41:23.329 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-14 01:41:23.329623 | orchestrator | 01:41:23.329 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-14 01:41:23.329681 | orchestrator | 01:41:23.329 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-14 01:41:23.329721 | orchestrator | 01:41:23.329 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-14 01:41:23.329762 | orchestrator | 01:41:23.329 STDOUT terraform:  + directory_permission = "0777" 2025-05-14 01:41:23.329789 | orchestrator | 01:41:23.329 STDOUT terraform:  + file_permission = "0644" 2025-05-14 01:41:23.329845 | orchestrator | 01:41:23.329 STDOUT terraform:  + filename = "inventory.ci" 2025-05-14 01:41:23.329893 | orchestrator | 01:41:23.329 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.329924 | orchestrator | 01:41:23.329 STDOUT terraform:  } 2025-05-14 01:41:23.329956 | orchestrator | 01:41:23.329 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-05-14 01:41:23.330004 | orchestrator | 01:41:23.329 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-05-14 01:41:23.330077 | orchestrator | 01:41:23.329 STDOUT terraform:  + content = (sensitive value) 2025-05-14 01:41:23.330118 | orchestrator | 01:41:23.330 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-05-14 01:41:23.330171 | orchestrator | 01:41:23.330 STDOUT terraform:  + 
content_base64sha512 = (known after apply) 2025-05-14 01:41:23.330220 | orchestrator | 01:41:23.330 STDOUT terraform:  + content_md5 = (known after apply) 2025-05-14 01:41:23.330270 | orchestrator | 01:41:23.330 STDOUT terraform:  + content_sha1 = (known after apply) 2025-05-14 01:41:23.330337 | orchestrator | 01:41:23.330 STDOUT terraform:  + content_sha256 = (known after apply) 2025-05-14 01:41:23.330427 | orchestrator | 01:41:23.330 STDOUT terraform:  + content_sha512 = (known after apply) 2025-05-14 01:41:23.330462 | orchestrator | 01:41:23.330 STDOUT terraform:  + directory_permission = "0700" 2025-05-14 01:41:23.330497 | orchestrator | 01:41:23.330 STDOUT terraform:  + file_permission = "0600" 2025-05-14 01:41:23.330541 | orchestrator | 01:41:23.330 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-05-14 01:41:23.330594 | orchestrator | 01:41:23.330 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.330612 | orchestrator | 01:41:23.330 STDOUT terraform:  } 2025-05-14 01:41:23.330655 | orchestrator | 01:41:23.330 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-05-14 01:41:23.330698 | orchestrator | 01:41:23.330 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-05-14 01:41:23.330729 | orchestrator | 01:41:23.330 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.330749 | orchestrator | 01:41:23.330 STDOUT terraform:  } 2025-05-14 01:41:23.330822 | orchestrator | 01:41:23.330 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-05-14 01:41:23.330890 | orchestrator | 01:41:23.330 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-05-14 01:41:23.330931 | orchestrator | 01:41:23.330 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.330958 | orchestrator | 01:41:23.330 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.331000 | orchestrator | 01:41:23.330 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.331051 | orchestrator | 01:41:23.330 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.331096 | orchestrator | 01:41:23.331 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.331151 | orchestrator | 01:41:23.331 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-05-14 01:41:23.331199 | orchestrator | 01:41:23.331 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.331233 | orchestrator | 01:41:23.331 STDOUT terraform:  + size = 80 2025-05-14 01:41:23.331282 | orchestrator | 01:41:23.331 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.331311 | orchestrator | 01:41:23.331 STDOUT terraform:  } 2025-05-14 01:41:23.331502 | orchestrator | 01:41:23.331 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-05-14 01:41:23.331568 | orchestrator | 01:41:23.331 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 01:41:23.331614 | orchestrator | 01:41:23.331 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.331643 | orchestrator | 01:41:23.331 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.331689 | orchestrator | 01:41:23.331 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.331733 | orchestrator | 01:41:23.331 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.331775 | orchestrator | 01:41:23.331 STDOUT terraform:  + metadata = (known 
after apply) 2025-05-14 01:41:23.331832 | orchestrator | 01:41:23.331 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-05-14 01:41:23.331875 | orchestrator | 01:41:23.331 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.331909 | orchestrator | 01:41:23.331 STDOUT terraform:  + size = 80 2025-05-14 01:41:23.331938 | orchestrator | 01:41:23.331 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.331957 | orchestrator | 01:41:23.331 STDOUT terraform:  } 2025-05-14 01:41:23.332025 | orchestrator | 01:41:23.331 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-05-14 01:41:23.332089 | orchestrator | 01:41:23.332 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 01:41:23.332134 | orchestrator | 01:41:23.332 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.332164 | orchestrator | 01:41:23.332 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.332211 | orchestrator | 01:41:23.332 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.332252 | orchestrator | 01:41:23.332 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.332296 | orchestrator | 01:41:23.332 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.332352 | orchestrator | 01:41:23.332 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-05-14 01:41:23.332420 | orchestrator | 01:41:23.332 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.332450 | orchestrator | 01:41:23.332 STDOUT terraform:  + size = 80 2025-05-14 01:41:23.332480 | orchestrator | 01:41:23.332 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.332507 | orchestrator | 01:41:23.332 STDOUT terraform:  } 2025-05-14 01:41:23.332609 | orchestrator | 01:41:23.332 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-05-14 01:41:23.332677 | orchestrator | 01:41:23.332 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 01:41:23.332718 | orchestrator | 01:41:23.332 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.332745 | orchestrator | 01:41:23.332 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.332784 | orchestrator | 01:41:23.332 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.332823 | orchestrator | 01:41:23.332 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.332863 | orchestrator | 01:41:23.332 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.332913 | orchestrator | 01:41:23.332 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-05-14 01:41:23.332952 | orchestrator | 01:41:23.332 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.332980 | orchestrator | 01:41:23.332 STDOUT terraform:  + size = 80 2025-05-14 01:41:23.333022 | orchestrator | 01:41:23.332 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.333028 | orchestrator | 01:41:23.333 STDOUT terraform:  } 2025-05-14 01:41:23.333084 | orchestrator | 01:41:23.333 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-05-14 01:41:23.333142 | orchestrator | 01:41:23.333 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 01:41:23.333191 | orchestrator | 01:41:23.333 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.333216 | 
orchestrator | 01:41:23.333 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.333256 | orchestrator | 01:41:23.333 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.333294 | orchestrator | 01:41:23.333 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.333333 | orchestrator | 01:41:23.333 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.333404 | orchestrator | 01:41:23.333 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-05-14 01:41:23.333428 | orchestrator | 01:41:23.333 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.333467 | orchestrator | 01:41:23.333 STDOUT terraform:  + size = 80 2025-05-14 01:41:23.333496 | orchestrator | 01:41:23.333 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.333514 | orchestrator | 01:41:23.333 STDOUT terraform:  } 2025-05-14 01:41:23.333581 | orchestrator | 01:41:23.333 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-05-14 01:41:23.333638 | orchestrator | 01:41:23.333 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 01:41:23.333676 | orchestrator | 01:41:23.333 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.333703 | orchestrator | 01:41:23.333 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.333743 | orchestrator | 01:41:23.333 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.333782 | orchestrator | 01:41:23.333 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.333822 | orchestrator | 01:41:23.333 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.333873 | orchestrator | 01:41:23.333 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-05-14 01:41:23.333912 | orchestrator | 01:41:23.333 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.333939 | orchestrator | 01:41:23.333 STDOUT terraform:  + size = 80 2025-05-14 01:41:23.333966 | orchestrator | 01:41:23.333 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.333976 | orchestrator | 01:41:23.333 STDOUT terraform:  } 2025-05-14 01:41:23.334058 | orchestrator | 01:41:23.333 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-05-14 01:41:23.334113 | orchestrator | 01:41:23.334 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-05-14 01:41:23.334155 | orchestrator | 01:41:23.334 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.334186 | orchestrator | 01:41:23.334 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.334221 | orchestrator | 01:41:23.334 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.334260 | orchestrator | 01:41:23.334 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.334299 | orchestrator | 01:41:23.334 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.334347 | orchestrator | 01:41:23.334 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-05-14 01:41:23.334414 | orchestrator | 01:41:23.334 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.334440 | orchestrator | 01:41:23.334 STDOUT terraform:  + size = 80 2025-05-14 01:41:23.334466 | orchestrator | 01:41:23.334 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.334483 | orchestrator | 01:41:23.334 STDOUT terraform:  } 2025-05-14 01:41:23.334540 | orchestrator | 
01:41:23.334 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-05-14 01:41:23.334594 | orchestrator | 01:41:23.334 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:41:23.334632 | orchestrator | 01:41:23.334 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.334660 | orchestrator | 01:41:23.334 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.334698 | orchestrator | 01:41:23.334 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.334735 | orchestrator | 01:41:23.334 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.334778 | orchestrator | 01:41:23.334 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-05-14 01:41:23.334816 | orchestrator | 01:41:23.334 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.334833 | orchestrator | 01:41:23.334 STDOUT terraform:  + size = 20 2025-05-14 01:41:23.334860 | orchestrator | 01:41:23.334 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.334878 | orchestrator | 01:41:23.334 STDOUT terraform:  } 2025-05-14 01:41:23.334931 | orchestrator | 01:41:23.334 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-14 01:41:23.334983 | orchestrator | 01:41:23.334 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:41:23.335021 | orchestrator | 01:41:23.334 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.335048 | orchestrator | 01:41:23.335 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.335087 | orchestrator | 01:41:23.335 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.335125 | orchestrator | 01:41:23.335 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.335170 | orchestrator | 01:41:23.335 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-14 01:41:23.335207 | orchestrator | 01:41:23.335 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.335233 | orchestrator | 01:41:23.335 STDOUT terraform:  + size = 20 2025-05-14 01:41:23.335262 | orchestrator | 01:41:23.335 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.335269 | orchestrator | 01:41:23.335 STDOUT terraform:  } 2025-05-14 01:41:23.335323 | orchestrator | 01:41:23.335 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-14 01:41:23.335387 | orchestrator | 01:41:23.335 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:41:23.335422 | orchestrator | 01:41:23.335 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.335447 | orchestrator | 01:41:23.335 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.335486 | orchestrator | 01:41:23.335 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.335524 | orchestrator | 01:41:23.335 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.335570 | orchestrator | 01:41:23.335 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-14 01:41:23.335608 | orchestrator | 01:41:23.335 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.335643 | orchestrator | 01:41:23.335 STDOUT terraform:  + size = 20 2025-05-14 01:41:23.335652 | orchestrator | 01:41:23.335 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.338199 | orchestrator | 01:41:23.335 STDOUT terraform:  } 2025-05-14 01:41:23.338272 | 
orchestrator | 01:41:23.335 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-14 01:41:23.338288 | orchestrator | 01:41:23.335 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:41:23.338300 | orchestrator | 01:41:23.335 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.338310 | orchestrator | 01:41:23.335 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.338320 | orchestrator | 01:41:23.335 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.338348 | orchestrator | 01:41:23.336 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.338358 | orchestrator | 01:41:23.336 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-14 01:41:23.338398 | orchestrator | 01:41:23.336 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.338408 | orchestrator | 01:41:23.336 STDOUT terraform:  + size = 20 2025-05-14 01:41:23.338418 | orchestrator | 01:41:23.336 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.338429 | orchestrator | 01:41:23.336 STDOUT terraform:  } 2025-05-14 01:41:23.338439 | orchestrator | 01:41:23.336 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-14 01:41:23.338449 | orchestrator | 01:41:23.336 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:41:23.338460 | orchestrator | 01:41:23.336 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.338470 | orchestrator | 01:41:23.336 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.338481 | orchestrator | 01:41:23.336 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.338491 | orchestrator | 01:41:23.336 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.338500 | orchestrator | 01:41:23.336 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-14 01:41:23.338510 | orchestrator | 01:41:23.336 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.338530 | orchestrator | 01:41:23.336 STDOUT terraform:  + size = 20 2025-05-14 01:41:23.338541 | orchestrator | 01:41:23.336 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.338551 | orchestrator | 01:41:23.336 STDOUT terraform:  } 2025-05-14 01:41:23.338561 | orchestrator | 01:41:23.336 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-14 01:41:23.338571 | orchestrator | 01:41:23.336 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:41:23.338581 | orchestrator | 01:41:23.336 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.338591 | orchestrator | 01:41:23.336 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.338601 | orchestrator | 01:41:23.336 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.338611 | orchestrator | 01:41:23.336 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.338621 | orchestrator | 01:41:23.336 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-14 01:41:23.338631 | orchestrator | 01:41:23.336 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.338642 | orchestrator | 01:41:23.336 STDOUT terraform:  + size = 20 2025-05-14 01:41:23.338652 | orchestrator | 01:41:23.336 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.338662 | orchestrator | 01:41:23.336 STDOUT terraform:  } 2025-05-14 
01:41:23.338672 | orchestrator | 01:41:23.336 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-14 01:41:23.338682 | orchestrator | 01:41:23.336 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:41:23.338699 | orchestrator | 01:41:23.336 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.338726 | orchestrator | 01:41:23.336 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.338737 | orchestrator | 01:41:23.336 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.338746 | orchestrator | 01:41:23.336 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.338756 | orchestrator | 01:41:23.336 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-14 01:41:23.338766 | orchestrator | 01:41:23.337 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.338775 | orchestrator | 01:41:23.337 STDOUT terraform:  + size = 20 2025-05-14 01:41:23.338785 | orchestrator | 01:41:23.337 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.338795 | orchestrator | 01:41:23.337 STDOUT terraform:  } 2025-05-14 01:41:23.338804 | orchestrator | 01:41:23.337 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-14 01:41:23.338814 | orchestrator | 01:41:23.337 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:41:23.338824 | orchestrator | 01:41:23.337 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.338833 | orchestrator | 01:41:23.337 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.338843 | orchestrator | 01:41:23.337 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.338853 | orchestrator | 01:41:23.337 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.338863 | orchestrator | 01:41:23.337 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-14 01:41:23.338872 | orchestrator | 01:41:23.337 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.338882 | orchestrator | 01:41:23.337 STDOUT terraform:  + size = 20 2025-05-14 01:41:23.338892 | orchestrator | 01:41:23.337 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.338901 | orchestrator | 01:41:23.337 STDOUT terraform:  } 2025-05-14 01:41:23.338911 | orchestrator | 01:41:23.337 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-14 01:41:23.338926 | orchestrator | 01:41:23.337 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-14 01:41:23.338936 | orchestrator | 01:41:23.337 STDOUT terraform:  + attachment = (known after apply) 2025-05-14 01:41:23.338946 | orchestrator | 01:41:23.337 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.338956 | orchestrator | 01:41:23.337 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.338965 | orchestrator | 01:41:23.337 STDOUT terraform:  + metadata = (known after apply) 2025-05-14 01:41:23.338975 | orchestrator | 01:41:23.337 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-14 01:41:23.338984 | orchestrator | 01:41:23.337 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.338994 | orchestrator | 01:41:23.337 STDOUT terraform:  + size = 20 2025-05-14 01:41:23.339004 | orchestrator | 01:41:23.337 STDOUT terraform:  + volume_type = "ssd" 2025-05-14 01:41:23.339028 | orchestrator | 01:41:23.337 STDOUT 
terraform:  } 2025-05-14 01:41:23.339038 | orchestrator | 01:41:23.337 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-14 01:41:23.339048 | orchestrator | 01:41:23.337 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-14 01:41:23.339058 | orchestrator | 01:41:23.337 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:41:23.339067 | orchestrator | 01:41:23.337 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:41:23.339077 | orchestrator | 01:41:23.337 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:41:23.339086 | orchestrator | 01:41:23.337 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.339096 | orchestrator | 01:41:23.337 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.339105 | orchestrator | 01:41:23.337 STDOUT terraform:  + config_drive = true 2025-05-14 01:41:23.339120 | orchestrator | 01:41:23.337 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:41:23.339130 | orchestrator | 01:41:23.337 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:41:23.339140 | orchestrator | 01:41:23.338 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-14 01:41:23.339149 | orchestrator | 01:41:23.338 STDOUT terraform:  + force_delete = false 2025-05-14 01:41:23.339159 | orchestrator | 01:41:23.338 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.339168 | orchestrator | 01:41:23.338 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.339178 | orchestrator | 01:41:23.338 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:41:23.339188 | orchestrator | 01:41:23.338 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:41:23.339197 | orchestrator | 01:41:23.338 STDOUT terraform:  + name = "testbed-manager" 2025-05-14 01:41:23.339207 | orchestrator | 01:41:23.338 STDOUT terraform:  + power_state = "active" 2025-05-14 01:41:23.339217 | orchestrator | 01:41:23.338 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.339227 | orchestrator | 01:41:23.338 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:41:23.339236 | orchestrator | 01:41:23.338 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:41:23.339246 | orchestrator | 01:41:23.338 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:41:23.339256 | orchestrator | 01:41:23.338 STDOUT terraform:  + user_data = (known after apply) 2025-05-14 01:41:23.339265 | orchestrator | 01:41:23.338 STDOUT terraform:  + block_device { 2025-05-14 01:41:23.339275 | orchestrator | 01:41:23.338 STDOUT terraform:  + boot_index = 0 2025-05-14 01:41:23.339285 | orchestrator | 01:41:23.338 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:41:23.339294 | orchestrator | 01:41:23.338 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:41:23.339304 | orchestrator | 01:41:23.338 STDOUT terraform:  + multiattach = false 2025-05-14 01:41:23.339314 | orchestrator | 01:41:23.338 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:41:23.339330 | orchestrator | 01:41:23.338 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.339339 | orchestrator | 01:41:23.338 STDOUT terraform:  } 2025-05-14 01:41:23.339354 | orchestrator | 01:41:23.338 STDOUT terraform:  + network { 2025-05-14 01:41:23.339391 | orchestrator | 01:41:23.338 STDOUT terraform:  + access_network = false 2025-05-14 01:41:23.339409 | orchestrator | 
01:41:23.338 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:41:23.339427 | orchestrator | 01:41:23.338 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:41:23.339442 | orchestrator | 01:41:23.338 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:41:23.339458 | orchestrator | 01:41:23.338 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:41:23.339468 | orchestrator | 01:41:23.338 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:41:23.339477 | orchestrator | 01:41:23.338 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.339487 | orchestrator | 01:41:23.338 STDOUT terraform:  } 2025-05-14 01:41:23.339497 | orchestrator | 01:41:23.338 STDOUT terraform:  } 2025-05-14 01:41:23.339507 | orchestrator | 01:41:23.338 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-14 01:41:23.339516 | orchestrator | 01:41:23.338 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 01:41:23.339526 | orchestrator | 01:41:23.338 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:41:23.339536 | orchestrator | 01:41:23.338 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:41:23.339545 | orchestrator | 01:41:23.338 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:41:23.339562 | orchestrator | 01:41:23.339 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.339572 | orchestrator | 01:41:23.339 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.339581 | orchestrator | 01:41:23.339 STDOUT terraform:  + config_drive = true 2025-05-14 01:41:23.339591 | orchestrator | 01:41:23.339 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:41:23.339600 | orchestrator | 01:41:23.339 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:41:23.339610 | orchestrator | 01:41:23.339 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 01:41:23.339619 | orchestrator | 01:41:23.339 STDOUT terraform:  + force_delete = false 2025-05-14 01:41:23.339629 | orchestrator | 01:41:23.339 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.339639 | orchestrator | 01:41:23.339 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.339648 | orchestrator | 01:41:23.339 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:41:23.339658 | orchestrator | 01:41:23.339 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:41:23.339672 | orchestrator | 01:41:23.339 STDOUT terraform:  + name = "testbed-node-0" 2025-05-14 01:41:23.339682 | orchestrator | 01:41:23.339 STDOUT terraform:  + power_state = "active" 2025-05-14 01:41:23.339779 | orchestrator | 01:41:23.339 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.339899 | orchestrator | 01:41:23.339 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:41:23.339918 | orchestrator | 01:41:23.339 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:41:23.340041 | orchestrator | 01:41:23.339 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:41:23.340185 | orchestrator | 01:41:23.340 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 01:41:23.340202 | orchestrator | 01:41:23.340 STDOUT terraform:  + block_device { 2025-05-14 01:41:23.340267 | orchestrator | 01:41:23.340 STDOUT terraform:  + boot_index = 0 2025-05-14 01:41:23.340339 | orchestrator | 01:41:23.340 STDOUT 
terraform:  + delete_on_termination = false 2025-05-14 01:41:23.340432 | orchestrator | 01:41:23.340 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:41:23.340502 | orchestrator | 01:41:23.340 STDOUT terraform:  + multiattach = false 2025-05-14 01:41:23.340627 | orchestrator | 01:41:23.340 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:41:23.340693 | orchestrator | 01:41:23.340 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.340708 | orchestrator | 01:41:23.340 STDOUT terraform:  } 2025-05-14 01:41:23.340757 | orchestrator | 01:41:23.340 STDOUT terraform:  + network { 2025-05-14 01:41:23.340808 | orchestrator | 01:41:23.340 STDOUT terraform:  + access_network = false 2025-05-14 01:41:23.340885 | orchestrator | 01:41:23.340 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:41:23.341035 | orchestrator | 01:41:23.340 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:41:23.341101 | orchestrator | 01:41:23.341 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:41:23.341188 | orchestrator | 01:41:23.341 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:41:23.341262 | orchestrator | 01:41:23.341 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:41:23.341340 | orchestrator | 01:41:23.341 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.341353 | orchestrator | 01:41:23.341 STDOUT terraform:  } 2025-05-14 01:41:23.341436 | orchestrator | 01:41:23.341 STDOUT terraform:  } 2025-05-14 01:41:23.341549 | orchestrator | 01:41:23.341 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-14 01:41:23.341653 | orchestrator | 01:41:23.341 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 01:41:23.341740 | orchestrator | 01:41:23.341 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:41:23.341826 | orchestrator | 01:41:23.341 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:41:23.341916 | orchestrator | 01:41:23.341 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:41:23.342004 | orchestrator | 01:41:23.341 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.342095 | orchestrator | 01:41:23.341 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.342146 | orchestrator | 01:41:23.342 STDOUT terraform:  + config_drive = true 2025-05-14 01:41:23.342228 | orchestrator | 01:41:23.342 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:41:23.342333 | orchestrator | 01:41:23.342 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:41:23.342438 | orchestrator | 01:41:23.342 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 01:41:23.342496 | orchestrator | 01:41:23.342 STDOUT terraform:  + force_delete = false 2025-05-14 01:41:23.342585 | orchestrator | 01:41:23.342 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.342698 | orchestrator | 01:41:23.342 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.342796 | orchestrator | 01:41:23.342 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:41:23.342859 | orchestrator | 01:41:23.342 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:41:23.342944 | orchestrator | 01:41:23.342 STDOUT terraform:  + name = "testbed-node-1" 2025-05-14 01:41:23.343006 | orchestrator | 01:41:23.342 STDOUT terraform:  + power_state = "active" 2025-05-14 01:41:23.343095 | orchestrator | 01:41:23.342 
STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.343181 | orchestrator | 01:41:23.343 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:41:23.343240 | orchestrator | 01:41:23.343 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:41:23.343330 | orchestrator | 01:41:23.343 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:41:23.343521 | orchestrator | 01:41:23.343 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 01:41:23.343539 | orchestrator | 01:41:23.343 STDOUT terraform:  + block_device { 2025-05-14 01:41:23.343613 | orchestrator | 01:41:23.343 STDOUT terraform:  + boot_index = 0 2025-05-14 01:41:23.343682 | orchestrator | 01:41:23.343 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:41:23.343757 | orchestrator | 01:41:23.343 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:41:23.343839 | orchestrator | 01:41:23.343 STDOUT terraform:  + multiattach = false 2025-05-14 01:41:23.343918 | orchestrator | 01:41:23.343 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:41:23.344008 | orchestrator | 01:41:23.343 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.344020 | orchestrator | 01:41:23.343 STDOUT terraform:  } 2025-05-14 01:41:23.344071 | orchestrator | 01:41:23.344 STDOUT terraform:  + network { 2025-05-14 01:41:23.344173 | orchestrator | 01:41:23.344 STDOUT terraform:  + access_network = false 2025-05-14 01:41:23.344255 | orchestrator | 01:41:23.344 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:41:23.344332 | orchestrator | 01:41:23.344 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:41:23.344428 | orchestrator | 01:41:23.344 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:41:23.344504 | orchestrator | 01:41:23.344 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:41:23.344581 | orchestrator | 01:41:23.344 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:41:23.344661 | orchestrator | 01:41:23.344 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.344674 | orchestrator | 01:41:23.344 STDOUT terraform:  } 2025-05-14 01:41:23.344721 | orchestrator | 01:41:23.344 STDOUT terraform:  } 2025-05-14 01:41:23.344989 | orchestrator | 01:41:23.344 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-14 01:41:23.345094 | orchestrator | 01:41:23.344 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 01:41:23.345399 | orchestrator | 01:41:23.345 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:41:23.345433 | orchestrator | 01:41:23.345 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:41:23.345442 | orchestrator | 01:41:23.345 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:41:23.345497 | orchestrator | 01:41:23.345 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.349599 | orchestrator | 01:41:23.345 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.349663 | orchestrator | 01:41:23.349 STDOUT terraform:  + config_drive = true 2025-05-14 01:41:23.349683 | orchestrator | 01:41:23.349 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:41:23.349722 | orchestrator | 01:41:23.349 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:41:23.349761 | orchestrator | 01:41:23.349 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 
01:41:23.349782 | orchestrator | 01:41:23.349 STDOUT terraform:  + force_delete = false 2025-05-14 01:41:23.349824 | orchestrator | 01:41:23.349 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.349863 | orchestrator | 01:41:23.349 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.349927 | orchestrator | 01:41:23.349 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:41:23.349936 | orchestrator | 01:41:23.349 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:41:23.349958 | orchestrator | 01:41:23.349 STDOUT terraform:  + name = "testbed-node-2" 2025-05-14 01:41:23.349983 | orchestrator | 01:41:23.349 STDOUT terraform:  + power_state = "active" 2025-05-14 01:41:23.350036 | orchestrator | 01:41:23.349 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.350104 | orchestrator | 01:41:23.350 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:41:23.350119 | orchestrator | 01:41:23.350 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:41:23.350166 | orchestrator | 01:41:23.350 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:41:23.350239 | orchestrator | 01:41:23.350 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 01:41:23.350250 | orchestrator | 01:41:23.350 STDOUT terraform:  + block_device { 2025-05-14 01:41:23.350277 | orchestrator | 01:41:23.350 STDOUT terraform:  + boot_index = 0 2025-05-14 01:41:23.350321 | orchestrator | 01:41:23.350 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:41:23.350344 | orchestrator | 01:41:23.350 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:41:23.350391 | orchestrator | 01:41:23.350 STDOUT terraform:  + multiattach = false 2025-05-14 01:41:23.350644 | orchestrator | 01:41:23.350 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:41:23.350741 | orchestrator | 01:41:23.350 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.350758 | orchestrator | 01:41:23.350 STDOUT terraform:  } 2025-05-14 01:41:23.350771 | orchestrator | 01:41:23.350 STDOUT terraform:  + network { 2025-05-14 01:41:23.350783 | orchestrator | 01:41:23.350 STDOUT terraform:  + access_network = false 2025-05-14 01:41:23.350794 | orchestrator | 01:41:23.350 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:41:23.350817 | orchestrator | 01:41:23.350 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:41:23.350829 | orchestrator | 01:41:23.350 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:41:23.350840 | orchestrator | 01:41:23.350 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:41:23.350851 | orchestrator | 01:41:23.350 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:41:23.350862 | orchestrator | 01:41:23.350 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.350873 | orchestrator | 01:41:23.350 STDOUT terraform:  } 2025-05-14 01:41:23.350884 | orchestrator | 01:41:23.350 STDOUT terraform:  } 2025-05-14 01:41:23.350896 | orchestrator | 01:41:23.350 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-05-14 01:41:23.350911 | orchestrator | 01:41:23.350 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 01:41:23.350926 | orchestrator | 01:41:23.350 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:41:23.350996 | orchestrator | 01:41:23.350 STDOUT terraform:  + access_ip_v6 = (known after apply) 
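The node_server plan entries around this point all share the same shape: an OSISM-8V-32 flavor in availability zone "nova", the "testbed" key pair, config_drive enabled, power_state "active", and a single boot-from-volume block device that is not deleted on termination. A minimal HCL sketch that would produce plans of this shape follows; the count of 6, the volume resource it boots from, and the port wiring are assumptions for illustration, only the literal values are taken from the plan output.

  resource "openstack_compute_instance_v2" "node_server" {
    count             = 6                                  # node_server[0]..[5] in the plan
    name              = "testbed-node-${count.index}"
    availability_zone = "nova"
    flavor_name       = "OSISM-8V-32"
    key_pair          = "testbed"
    config_drive      = true
    power_state       = "active"

    block_device {
      # Boot from a pre-created volume that survives instance deletion,
      # matching delete_on_termination = false in the plan.
      boot_index            = 0
      source_type           = "volume"
      destination_type      = "volume"
      delete_on_termination = false
      uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id   # assumed volume resource
    }

    network {
      port = openstack_networking_port_v2.node_port_management[count.index].id   # assumed port wiring
    }
  }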
2025-05-14 01:41:23.351029 | orchestrator | 01:41:23.350 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:41:23.351098 | orchestrator | 01:41:23.351 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.351143 | orchestrator | 01:41:23.351 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.351155 | orchestrator | 01:41:23.351 STDOUT terraform:  + config_drive = true 2025-05-14 01:41:23.351170 | orchestrator | 01:41:23.351 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:41:23.351221 | orchestrator | 01:41:23.351 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:41:23.351244 | orchestrator | 01:41:23.351 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 01:41:23.351270 | orchestrator | 01:41:23.351 STDOUT terraform:  + force_delete = false 2025-05-14 01:41:23.351295 | orchestrator | 01:41:23.351 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.351321 | orchestrator | 01:41:23.351 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.351460 | orchestrator | 01:41:23.351 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:41:23.351525 | orchestrator | 01:41:23.351 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:41:23.351546 | orchestrator | 01:41:23.351 STDOUT terraform:  + name = "testbed-node-3" 2025-05-14 01:41:23.351565 | orchestrator | 01:41:23.351 STDOUT terraform:  + power_state = "active" 2025-05-14 01:41:23.351591 | orchestrator | 01:41:23.351 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.351610 | orchestrator | 01:41:23.351 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:41:23.351630 | orchestrator | 01:41:23.351 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:41:23.351655 | orchestrator | 01:41:23.351 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:41:23.351673 | orchestrator | 01:41:23.351 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 01:41:23.351700 | orchestrator | 01:41:23.351 STDOUT terraform:  + block_device { 2025-05-14 01:41:23.351717 | orchestrator | 01:41:23.351 STDOUT terraform:  + boot_index = 0 2025-05-14 01:41:23.351731 | orchestrator | 01:41:23.351 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:41:23.351789 | orchestrator | 01:41:23.351 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:41:23.351806 | orchestrator | 01:41:23.351 STDOUT terraform:  + multiattach = false 2025-05-14 01:41:23.351868 | orchestrator | 01:41:23.351 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:41:23.351885 | orchestrator | 01:41:23.351 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.351897 | orchestrator | 01:41:23.351 STDOUT terraform:  } 2025-05-14 01:41:23.351912 | orchestrator | 01:41:23.351 STDOUT terraform:  + network { 2025-05-14 01:41:23.351926 | orchestrator | 01:41:23.351 STDOUT terraform:  + access_network = false 2025-05-14 01:41:23.351956 | orchestrator | 01:41:23.351 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:41:23.351988 | orchestrator | 01:41:23.351 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:41:23.352020 | orchestrator | 01:41:23.351 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:41:23.352053 | orchestrator | 01:41:23.352 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:41:23.352091 | orchestrator | 01:41:23.352 STDOUT terraform:  + port = (known after apply) 
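Each instance plan also shows user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854". With this provider the state normally holds a checksum of the cloud-init payload rather than the payload itself, which is why a 40-character hex string appears here instead of YAML. In HCL terms the argument would be set to the real payload, roughly as in the excerpt below (an excerpt of the sketch above; the file path is purely illustrative, the actual cloud-init content is not visible in this log):

  resource "openstack_compute_instance_v2" "node_server" {
    # ...arguments as in the sketch above...
    # Only a checksum of this value ends up in state, which is what the
    # plan output prints.
    user_data = file("${path.module}/user_data.yml")   # illustrative path
  }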
2025-05-14 01:41:23.352115 | orchestrator | 01:41:23.352 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.352132 | orchestrator | 01:41:23.352 STDOUT terraform:  } 2025-05-14 01:41:23.352153 | orchestrator | 01:41:23.352 STDOUT terraform:  } 2025-05-14 01:41:23.352179 | orchestrator | 01:41:23.352 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-14 01:41:23.352206 | orchestrator | 01:41:23.352 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 01:41:23.352255 | orchestrator | 01:41:23.352 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:41:23.352281 | orchestrator | 01:41:23.352 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:41:23.352317 | orchestrator | 01:41:23.352 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:41:23.352333 | orchestrator | 01:41:23.352 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.352390 | orchestrator | 01:41:23.352 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.352408 | orchestrator | 01:41:23.352 STDOUT terraform:  + config_drive = true 2025-05-14 01:41:23.352453 | orchestrator | 01:41:23.352 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:41:23.352489 | orchestrator | 01:41:23.352 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:41:23.352512 | orchestrator | 01:41:23.352 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 01:41:23.352539 | orchestrator | 01:41:23.352 STDOUT terraform:  + force_delete = false 2025-05-14 01:41:23.352566 | orchestrator | 01:41:23.352 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.352606 | orchestrator | 01:41:23.352 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.352641 | orchestrator | 01:41:23.352 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:41:23.352658 | orchestrator | 01:41:23.352 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:41:23.352695 | orchestrator | 01:41:23.352 STDOUT terraform:  + name = "testbed-node-4" 2025-05-14 01:41:23.352726 | orchestrator | 01:41:23.352 STDOUT terraform:  + power_state = "active" 2025-05-14 01:41:23.352764 | orchestrator | 01:41:23.352 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.352813 | orchestrator | 01:41:23.352 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:41:23.352829 | orchestrator | 01:41:23.352 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:41:23.352845 | orchestrator | 01:41:23.352 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:41:23.352903 | orchestrator | 01:41:23.352 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 01:41:23.352922 | orchestrator | 01:41:23.352 STDOUT terraform:  + block_device { 2025-05-14 01:41:23.352938 | orchestrator | 01:41:23.352 STDOUT terraform:  + boot_index = 0 2025-05-14 01:41:23.352982 | orchestrator | 01:41:23.352 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:41:23.353009 | orchestrator | 01:41:23.352 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:41:23.353064 | orchestrator | 01:41:23.352 STDOUT terraform:  + multiattach = false 2025-05-14 01:41:23.353085 | orchestrator | 01:41:23.353 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:41:23.353112 | orchestrator | 01:41:23.353 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.353133 | orchestrator | 
01:41:23.353 STDOUT terraform:  } 2025-05-14 01:41:23.353155 | orchestrator | 01:41:23.353 STDOUT terraform:  + network { 2025-05-14 01:41:23.353180 | orchestrator | 01:41:23.353 STDOUT terraform:  + access_network = false 2025-05-14 01:41:23.353202 | orchestrator | 01:41:23.353 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:41:23.353235 | orchestrator | 01:41:23.353 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:41:23.353258 | orchestrator | 01:41:23.353 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:41:23.353272 | orchestrator | 01:41:23.353 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:41:23.353284 | orchestrator | 01:41:23.353 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:41:23.353298 | orchestrator | 01:41:23.353 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.353310 | orchestrator | 01:41:23.353 STDOUT terraform:  } 2025-05-14 01:41:23.353321 | orchestrator | 01:41:23.353 STDOUT terraform:  } 2025-05-14 01:41:23.353336 | orchestrator | 01:41:23.353 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-14 01:41:23.353413 | orchestrator | 01:41:23.353 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-14 01:41:23.353434 | orchestrator | 01:41:23.353 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-14 01:41:23.353479 | orchestrator | 01:41:23.353 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-14 01:41:23.353510 | orchestrator | 01:41:23.353 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-14 01:41:23.353540 | orchestrator | 01:41:23.353 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.353556 | orchestrator | 01:41:23.353 STDOUT terraform:  + availability_zone = "nova" 2025-05-14 01:41:23.353580 | orchestrator | 01:41:23.353 STDOUT terraform:  + config_drive = true 2025-05-14 01:41:23.353655 | orchestrator | 01:41:23.353 STDOUT terraform:  + created = (known after apply) 2025-05-14 01:41:23.353670 | orchestrator | 01:41:23.353 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-14 01:41:23.353686 | orchestrator | 01:41:23.353 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-14 01:41:23.353697 | orchestrator | 01:41:23.353 STDOUT terraform:  + force_delete = false 2025-05-14 01:41:23.353730 | orchestrator | 01:41:23.353 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.353776 | orchestrator | 01:41:23.353 STDOUT terraform:  + image_id = (known after apply) 2025-05-14 01:41:23.353793 | orchestrator | 01:41:23.353 STDOUT terraform:  + image_name = (known after apply) 2025-05-14 01:41:23.353817 | orchestrator | 01:41:23.353 STDOUT terraform:  + key_pair = "testbed" 2025-05-14 01:41:23.353842 | orchestrator | 01:41:23.353 STDOUT terraform:  + name = "testbed-node-5" 2025-05-14 01:41:23.353863 | orchestrator | 01:41:23.353 STDOUT terraform:  + power_state = "active" 2025-05-14 01:41:23.353909 | orchestrator | 01:41:23.353 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.353936 | orchestrator | 01:41:23.353 STDOUT terraform:  + security_groups = (known after apply) 2025-05-14 01:41:23.353961 | orchestrator | 01:41:23.353 STDOUT terraform:  + stop_before_destroy = false 2025-05-14 01:41:23.353987 | orchestrator | 01:41:23.353 STDOUT terraform:  + updated = (known after apply) 2025-05-14 01:41:23.354090 | orchestrator | 01:41:23.353 STDOUT terraform:  + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-14 01:41:23.354131 | orchestrator | 01:41:23.354 STDOUT terraform:  + block_device { 2025-05-14 01:41:23.354159 | orchestrator | 01:41:23.354 STDOUT terraform:  + boot_index = 0 2025-05-14 01:41:23.354178 | orchestrator | 01:41:23.354 STDOUT terraform:  + delete_on_termination = false 2025-05-14 01:41:23.354190 | orchestrator | 01:41:23.354 STDOUT terraform:  + destination_type = "volume" 2025-05-14 01:41:23.354204 | orchestrator | 01:41:23.354 STDOUT terraform:  + multiattach = false 2025-05-14 01:41:23.354216 | orchestrator | 01:41:23.354 STDOUT terraform:  + source_type = "volume" 2025-05-14 01:41:23.354253 | orchestrator | 01:41:23.354 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.354270 | orchestrator | 01:41:23.354 STDOUT terraform:  } 2025-05-14 01:41:23.354285 | orchestrator | 01:41:23.354 STDOUT terraform:  + network { 2025-05-14 01:41:23.354300 | orchestrator | 01:41:23.354 STDOUT terraform:  + access_network = false 2025-05-14 01:41:23.354334 | orchestrator | 01:41:23.354 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-14 01:41:23.354400 | orchestrator | 01:41:23.354 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-14 01:41:23.354450 | orchestrator | 01:41:23.354 STDOUT terraform:  + mac = (known after apply) 2025-05-14 01:41:23.354467 | orchestrator | 01:41:23.354 STDOUT terraform:  + name = (known after apply) 2025-05-14 01:41:23.354513 | orchestrator | 01:41:23.354 STDOUT terraform:  + port = (known after apply) 2025-05-14 01:41:23.354530 | orchestrator | 01:41:23.354 STDOUT terraform:  + uuid = (known after apply) 2025-05-14 01:41:23.354544 | orchestrator | 01:41:23.354 STDOUT terraform:  } 2025-05-14 01:41:23.354568 | orchestrator | 01:41:23.354 STDOUT terraform:  } 2025-05-14 01:41:23.354583 | orchestrator | 01:41:23.354 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-05-14 01:41:23.354623 | orchestrator | 01:41:23.354 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-05-14 01:41:23.354646 | orchestrator | 01:41:23.354 STDOUT terraform:  + fingerprint = (known after apply) 2025-05-14 01:41:23.354669 | orchestrator | 01:41:23.354 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.354690 | orchestrator | 01:41:23.354 STDOUT terraform:  + name = "testbed" 2025-05-14 01:41:23.354712 | orchestrator | 01:41:23.354 STDOUT terraform:  + private_key = (sensitive value) 2025-05-14 01:41:23.354736 | orchestrator | 01:41:23.354 STDOUT terraform:  + public_key = (known after apply) 2025-05-14 01:41:23.354761 | orchestrator | 01:41:23.354 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.354787 | orchestrator | 01:41:23.354 STDOUT terraform:  + user_id = (known after apply) 2025-05-14 01:41:23.354807 | orchestrator | 01:41:23.354 STDOUT terraform:  } 2025-05-14 01:41:23.354855 | orchestrator | 01:41:23.354 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-05-14 01:41:23.354909 | orchestrator | 01:41:23.354 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:41:23.354936 | orchestrator | 01:41:23.354 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:41:23.354951 | orchestrator | 01:41:23.354 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.354989 | orchestrator | 01:41:23.354 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:41:23.355015 | 
orchestrator | 01:41:23.354 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.355041 | orchestrator | 01:41:23.355 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:41:23.355061 | orchestrator | 01:41:23.355 STDOUT terraform:  } 2025-05-14 01:41:23.355107 | orchestrator | 01:41:23.355 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-05-14 01:41:23.355162 | orchestrator | 01:41:23.355 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:41:23.355192 | orchestrator | 01:41:23.355 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:41:23.355213 | orchestrator | 01:41:23.355 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.355235 | orchestrator | 01:41:23.355 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:41:23.355271 | orchestrator | 01:41:23.355 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.355303 | orchestrator | 01:41:23.355 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:41:23.355324 | orchestrator | 01:41:23.355 STDOUT terraform:  } 2025-05-14 01:41:23.355463 | orchestrator | 01:41:23.355 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-05-14 01:41:23.355489 | orchestrator | 01:41:23.355 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:41:23.355508 | orchestrator | 01:41:23.355 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:41:23.355519 | orchestrator | 01:41:23.355 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.355534 | orchestrator | 01:41:23.355 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:41:23.355548 | orchestrator | 01:41:23.355 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.355562 | orchestrator | 01:41:23.355 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:41:23.355576 | orchestrator | 01:41:23.355 STDOUT terraform:  } 2025-05-14 01:41:23.355629 | orchestrator | 01:41:23.355 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-05-14 01:41:23.355678 | orchestrator | 01:41:23.355 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:41:23.355716 | orchestrator | 01:41:23.355 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:41:23.355747 | orchestrator | 01:41:23.355 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.355778 | orchestrator | 01:41:23.355 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:41:23.355809 | orchestrator | 01:41:23.355 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.355851 | orchestrator | 01:41:23.355 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:41:23.355876 | orchestrator | 01:41:23.355 STDOUT terraform:  } 2025-05-14 01:41:23.355917 | orchestrator | 01:41:23.355 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-05-14 01:41:23.355942 | orchestrator | 01:41:23.355 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:41:23.355967 | orchestrator | 01:41:23.355 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:41:23.355986 | orchestrator | 01:41:23.355 STDOUT terraform:  + id = (known after apply) 
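The openstack_compute_volume_attach_v2.node_volume_attachment entries ([0] through [8] across these lines) are identical: device, instance_id, volume_id and region are all only known after apply. A sketch of such an attachment resource follows; how the nine attachments are distributed across the six nodes cannot be read from the log, so the index arithmetic and the extra-volume resource name are illustrative only.

  resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
    count = 9   # node_volume_attachment[0]..[8] in the plan

    # Illustrative wiring only; the real mapping of attachments to nodes
    # is not visible in the plan output.
    instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
    volume_id   = openstack_blockstorage_volume_v3.node_extra_volume[count.index].id   # assumed volume resource
  }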
2025-05-14 01:41:23.356014 | orchestrator | 01:41:23.355 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:41:23.356043 | orchestrator | 01:41:23.356 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.356072 | orchestrator | 01:41:23.356 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:41:23.356085 | orchestrator | 01:41:23.356 STDOUT terraform:  } 2025-05-14 01:41:23.356132 | orchestrator | 01:41:23.356 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-05-14 01:41:23.356182 | orchestrator | 01:41:23.356 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:41:23.356210 | orchestrator | 01:41:23.356 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:41:23.356239 | orchestrator | 01:41:23.356 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.356269 | orchestrator | 01:41:23.356 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:41:23.356298 | orchestrator | 01:41:23.356 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.356326 | orchestrator | 01:41:23.356 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:41:23.356349 | orchestrator | 01:41:23.356 STDOUT terraform:  } 2025-05-14 01:41:23.356394 | orchestrator | 01:41:23.356 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-05-14 01:41:23.356453 | orchestrator | 01:41:23.356 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:41:23.356478 | orchestrator | 01:41:23.356 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:41:23.356499 | orchestrator | 01:41:23.356 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.356519 | orchestrator | 01:41:23.356 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:41:23.356541 | orchestrator | 01:41:23.356 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.356577 | orchestrator | 01:41:23.356 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:41:23.356596 | orchestrator | 01:41:23.356 STDOUT terraform:  } 2025-05-14 01:41:23.356638 | orchestrator | 01:41:23.356 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-05-14 01:41:23.356695 | orchestrator | 01:41:23.356 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:41:23.356718 | orchestrator | 01:41:23.356 STDOUT terraform:  + device = (known after apply) 2025-05-14 01:41:23.356747 | orchestrator | 01:41:23.356 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.356760 | orchestrator | 01:41:23.356 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:41:23.356785 | orchestrator | 01:41:23.356 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.356819 | orchestrator | 01:41:23.356 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:41:23.356833 | orchestrator | 01:41:23.356 STDOUT terraform:  } 2025-05-14 01:41:23.356884 | orchestrator | 01:41:23.356 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-05-14 01:41:23.356928 | orchestrator | 01:41:23.356 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-14 01:41:23.356961 | orchestrator | 01:41:23.356 STDOUT 
terraform:  + device = (known after apply) 2025-05-14 01:41:23.357049 | orchestrator | 01:41:23.356 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.357093 | orchestrator | 01:41:23.357 STDOUT terraform:  + instance_id = (known after apply) 2025-05-14 01:41:23.357123 | orchestrator | 01:41:23.357 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.357154 | orchestrator | 01:41:23.357 STDOUT terraform:  + volume_id = (known after apply) 2025-05-14 01:41:23.357168 | orchestrator | 01:41:23.357 STDOUT terraform:  } 2025-05-14 01:41:23.357245 | orchestrator | 01:41:23.357 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-05-14 01:41:23.357298 | orchestrator | 01:41:23.357 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-05-14 01:41:23.357312 | orchestrator | 01:41:23.357 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-14 01:41:23.357350 | orchestrator | 01:41:23.357 STDOUT terraform:  + floating_ip = (known after apply) 2025-05-14 01:41:23.357412 | orchestrator | 01:41:23.357 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.357442 | orchestrator | 01:41:23.357 STDOUT terraform:  + port_id = (known after apply) 2025-05-14 01:41:23.357480 | orchestrator | 01:41:23.357 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.357491 | orchestrator | 01:41:23.357 STDOUT terraform:  } 2025-05-14 01:41:23.357528 | orchestrator | 01:41:23.357 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-05-14 01:41:23.357576 | orchestrator | 01:41:23.357 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-05-14 01:41:23.357606 | orchestrator | 01:41:23.357 STDOUT terraform:  + address = (known after apply) 2025-05-14 01:41:23.357627 | orchestrator | 01:41:23.357 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.357647 | orchestrator | 01:41:23.357 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-14 01:41:23.357667 | orchestrator | 01:41:23.357 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:41:23.357689 | orchestrator | 01:41:23.357 STDOUT terraform:  + fixed_ip = (known after apply) 2025-05-14 01:41:23.357709 | orchestrator | 01:41:23.357 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.357753 | orchestrator | 01:41:23.357 STDOUT terraform:  + pool = "public" 2025-05-14 01:41:23.357769 | orchestrator | 01:41:23.357 STDOUT terraform:  + port_id = (known after apply) 2025-05-14 01:41:23.357792 | orchestrator | 01:41:23.357 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.357823 | orchestrator | 01:41:23.357 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:41:23.357837 | orchestrator | 01:41:23.357 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.357850 | orchestrator | 01:41:23.357 STDOUT terraform:  } 2025-05-14 01:41:23.357915 | orchestrator | 01:41:23.357 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-05-14 01:41:23.357949 | orchestrator | 01:41:23.357 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-05-14 01:41:23.357987 | orchestrator | 01:41:23.357 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:41:23.358059 | orchestrator | 01:41:23.357 STDOUT terraform:  + all_tags = 
(known after apply) 2025-05-14 01:41:23.358077 | orchestrator | 01:41:23.358 STDOUT terraform:  + availability_zone_hints = [ 2025-05-14 01:41:23.358087 | orchestrator | 01:41:23.358 STDOUT terraform:  + "nova", 2025-05-14 01:41:23.358100 | orchestrator | 01:41:23.358 STDOUT terraform:  ] 2025-05-14 01:41:23.358127 | orchestrator | 01:41:23.358 STDOUT terraform:  + dns_domain = (known after apply) 2025-05-14 01:41:23.358164 | orchestrator | 01:41:23.358 STDOUT terraform:  + external = (known after apply) 2025-05-14 01:41:23.358203 | orchestrator | 01:41:23.358 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.358241 | orchestrator | 01:41:23.358 STDOUT terraform:  + mtu = (known after apply) 2025-05-14 01:41:23.358282 | orchestrator | 01:41:23.358 STDOUT terraform:  + name = "net-testbed-management" 2025-05-14 01:41:23.358319 | orchestrator | 01:41:23.358 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:41:23.358357 | orchestrator | 01:41:23.358 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:41:23.358396 | orchestrator | 01:41:23.358 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.358441 | orchestrator | 01:41:23.358 STDOUT terraform:  + shared = (known after apply) 2025-05-14 01:41:23.358485 | orchestrator | 01:41:23.358 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.358529 | orchestrator | 01:41:23.358 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-05-14 01:41:23.358544 | orchestrator | 01:41:23.358 STDOUT terraform:  + segments (known after apply) 2025-05-14 01:41:23.358557 | orchestrator | 01:41:23.358 STDOUT terraform:  } 2025-05-14 01:41:23.358604 | orchestrator | 01:41:23.358 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-05-14 01:41:23.358650 | orchestrator | 01:41:23.358 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-05-14 01:41:23.358708 | orchestrator | 01:41:23.358 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:41:23.358733 | orchestrator | 01:41:23.358 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:41:23.358756 | orchestrator | 01:41:23.358 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:41:23.358777 | orchestrator | 01:41:23.358 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.358810 | orchestrator | 01:41:23.358 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:41:23.358849 | orchestrator | 01:41:23.358 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:41:23.358886 | orchestrator | 01:41:23.358 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:41:23.358935 | orchestrator | 01:41:23.358 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:41:23.358976 | orchestrator | 01:41:23.358 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.359001 | orchestrator | 01:41:23.358 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:41:23.359048 | orchestrator | 01:41:23.358 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:41:23.359076 | orchestrator | 01:41:23.359 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:41:23.359112 | orchestrator | 01:41:23.359 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:41:23.359151 | orchestrator | 01:41:23.359 STDOUT terraform:  + region = (known after 
apply) 2025-05-14 01:41:23.359196 | orchestrator | 01:41:23.359 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:41:23.359234 | orchestrator | 01:41:23.359 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.359247 | orchestrator | 01:41:23.359 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.359281 | orchestrator | 01:41:23.359 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:41:23.359297 | orchestrator | 01:41:23.359 STDOUT terraform:  } 2025-05-14 01:41:23.359319 | orchestrator | 01:41:23.359 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.359340 | orchestrator | 01:41:23.359 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:41:23.359353 | orchestrator | 01:41:23.359 STDOUT terraform:  } 2025-05-14 01:41:23.359410 | orchestrator | 01:41:23.359 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:41:23.359422 | orchestrator | 01:41:23.359 STDOUT terraform:  + fixed_ip { 2025-05-14 01:41:23.359435 | orchestrator | 01:41:23.359 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-05-14 01:41:23.359448 | orchestrator | 01:41:23.359 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:41:23.359461 | orchestrator | 01:41:23.359 STDOUT terraform:  } 2025-05-14 01:41:23.359473 | orchestrator | 01:41:23.359 STDOUT terraform:  } 2025-05-14 01:41:23.359530 | orchestrator | 01:41:23.359 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-05-14 01:41:23.359575 | orchestrator | 01:41:23.359 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 01:41:23.359612 | orchestrator | 01:41:23.359 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:41:23.359640 | orchestrator | 01:41:23.359 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:41:23.359680 | orchestrator | 01:41:23.359 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:41:23.359723 | orchestrator | 01:41:23.359 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.359758 | orchestrator | 01:41:23.359 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:41:23.359798 | orchestrator | 01:41:23.359 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:41:23.359820 | orchestrator | 01:41:23.359 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:41:23.359864 | orchestrator | 01:41:23.359 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:41:23.359900 | orchestrator | 01:41:23.359 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.359937 | orchestrator | 01:41:23.359 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:41:23.359973 | orchestrator | 01:41:23.359 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:41:23.360008 | orchestrator | 01:41:23.359 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:41:23.360045 | orchestrator | 01:41:23.360 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:41:23.360083 | orchestrator | 01:41:23.360 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.360118 | orchestrator | 01:41:23.360 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:41:23.360155 | orchestrator | 01:41:23.360 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.360169 | orchestrator | 01:41:23.360 STDOUT terraform:  
+ allowed_address_pairs { 2025-05-14 01:41:23.360201 | orchestrator | 01:41:23.360 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:41:23.360215 | orchestrator | 01:41:23.360 STDOUT terraform:  } 2025-05-14 01:41:23.360227 | orchestrator | 01:41:23.360 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.360257 | orchestrator | 01:41:23.360 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 01:41:23.360270 | orchestrator | 01:41:23.360 STDOUT terraform:  } 2025-05-14 01:41:23.360282 | orchestrator | 01:41:23.360 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.360313 | orchestrator | 01:41:23.360 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:41:23.360327 | orchestrator | 01:41:23.360 STDOUT terraform:  } 2025-05-14 01:41:23.360339 | orchestrator | 01:41:23.360 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.360399 | orchestrator | 01:41:23.360 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 01:41:23.360415 | orchestrator | 01:41:23.360 STDOUT terraform:  } 2025-05-14 01:41:23.360428 | orchestrator | 01:41:23.360 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:41:23.360440 | orchestrator | 01:41:23.360 STDOUT terraform:  + fixed_ip { 2025-05-14 01:41:23.360454 | orchestrator | 01:41:23.360 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-05-14 01:41:23.360490 | orchestrator | 01:41:23.360 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:41:23.360504 | orchestrator | 01:41:23.360 STDOUT terraform:  } 2025-05-14 01:41:23.360517 | orchestrator | 01:41:23.360 STDOUT terraform:  } 2025-05-14 01:41:23.360560 | orchestrator | 01:41:23.360 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-05-14 01:41:23.360608 | orchestrator | 01:41:23.360 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 01:41:23.360645 | orchestrator | 01:41:23.360 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:41:23.360682 | orchestrator | 01:41:23.360 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:41:23.360722 | orchestrator | 01:41:23.360 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:41:23.360764 | orchestrator | 01:41:23.360 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.360802 | orchestrator | 01:41:23.360 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:41:23.360825 | orchestrator | 01:41:23.360 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:41:23.360865 | orchestrator | 01:41:23.360 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:41:23.360902 | orchestrator | 01:41:23.360 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:41:23.360940 | orchestrator | 01:41:23.360 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.360977 | orchestrator | 01:41:23.360 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:41:23.361014 | orchestrator | 01:41:23.360 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:41:23.361050 | orchestrator | 01:41:23.361 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:41:23.361087 | orchestrator | 01:41:23.361 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:41:23.361123 | orchestrator | 01:41:23.361 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.361158 | 
orchestrator | 01:41:23.361 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:41:23.361195 | orchestrator | 01:41:23.361 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.361209 | orchestrator | 01:41:23.361 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.361241 | orchestrator | 01:41:23.361 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:41:23.361254 | orchestrator | 01:41:23.361 STDOUT terraform:  } 2025-05-14 01:41:23.361267 | orchestrator | 01:41:23.361 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.361298 | orchestrator | 01:41:23.361 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 01:41:23.361312 | orchestrator | 01:41:23.361 STDOUT terraform:  } 2025-05-14 01:41:23.361324 | orchestrator | 01:41:23.361 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.361360 | orchestrator | 01:41:23.361 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:41:23.361551 | orchestrator | 01:41:23.361 STDOUT terraform:  } 2025-05-14 01:41:23.361568 | orchestrator | 01:41:23.361 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.361578 | orchestrator | 01:41:23.361 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 01:41:23.361588 | orchestrator | 01:41:23.361 STDOUT terraform:  } 2025-05-14 01:41:23.361597 | orchestrator | 01:41:23.361 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:41:23.361619 | orchestrator | 01:41:23.361 STDOUT terraform:  + fixed_ip { 2025-05-14 01:41:23.361629 | orchestrator | 01:41:23.361 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-05-14 01:41:23.361639 | orchestrator | 01:41:23.361 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:41:23.361649 | orchestrator | 01:41:23.361 STDOUT terraform:  } 2025-05-14 01:41:23.361716 | orchestrator | 01:41:23.361 STDOUT terraform:  } 2025-05-14 01:41:23.361742 | orchestrator | 01:41:23.361 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-05-14 01:41:23.361752 | orchestrator | 01:41:23.361 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 01:41:23.361759 | orchestrator | 01:41:23.361 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:41:23.361766 | orchestrator | 01:41:23.361 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:41:23.361773 | orchestrator | 01:41:23.361 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:41:23.361790 | orchestrator | 01:41:23.361 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.361801 | orchestrator | 01:41:23.361 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:41:23.361808 | orchestrator | 01:41:23.361 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:41:23.361834 | orchestrator | 01:41:23.361 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:41:23.361876 | orchestrator | 01:41:23.361 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:41:23.361913 | orchestrator | 01:41:23.361 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.361949 | orchestrator | 01:41:23.361 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:41:23.361986 | orchestrator | 01:41:23.361 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:41:23.362044 | orchestrator | 01:41:23.361 STDOUT terraform:  + port_security_enabled = 
(known after apply) 2025-05-14 01:41:23.362075 | orchestrator | 01:41:23.362 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:41:23.362112 | orchestrator | 01:41:23.362 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.362158 | orchestrator | 01:41:23.362 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:41:23.362194 | orchestrator | 01:41:23.362 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.362216 | orchestrator | 01:41:23.362 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.362247 | orchestrator | 01:41:23.362 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:41:23.362256 | orchestrator | 01:41:23.362 STDOUT terraform:  } 2025-05-14 01:41:23.362279 | orchestrator | 01:41:23.362 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.362310 | orchestrator | 01:41:23.362 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 01:41:23.362320 | orchestrator | 01:41:23.362 STDOUT terraform:  } 2025-05-14 01:41:23.362337 | orchestrator | 01:41:23.362 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.362382 | orchestrator | 01:41:23.362 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:41:23.362397 | orchestrator | 01:41:23.362 STDOUT terraform:  } 2025-05-14 01:41:23.362406 | orchestrator | 01:41:23.362 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.362431 | orchestrator | 01:41:23.362 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 01:41:23.362441 | orchestrator | 01:41:23.362 STDOUT terraform:  } 2025-05-14 01:41:23.362466 | orchestrator | 01:41:23.362 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:41:23.362475 | orchestrator | 01:41:23.362 STDOUT terraform:  + fixed_ip { 2025-05-14 01:41:23.362502 | orchestrator | 01:41:23.362 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-14 01:41:23.362532 | orchestrator | 01:41:23.362 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:41:23.362542 | orchestrator | 01:41:23.362 STDOUT terraform:  } 2025-05-14 01:41:23.362551 | orchestrator | 01:41:23.362 STDOUT terraform:  } 2025-05-14 01:41:23.362603 | orchestrator | 01:41:23.362 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-14 01:41:23.362659 | orchestrator | 01:41:23.362 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 01:41:23.362696 | orchestrator | 01:41:23.362 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:41:23.362707 | orchestrator | 01:41:23.362 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:41:23.362756 | orchestrator | 01:41:23.362 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:41:23.362794 | orchestrator | 01:41:23.362 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.362831 | orchestrator | 01:41:23.362 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:41:23.362861 | orchestrator | 01:41:23.362 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:41:23.362900 | orchestrator | 01:41:23.362 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:41:23.362937 | orchestrator | 01:41:23.362 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:41:23.362975 | orchestrator | 01:41:23.362 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.363012 | orchestrator | 01:41:23.362 
STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:41:23.363042 | orchestrator | 01:41:23.362 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:41:23.363082 | orchestrator | 01:41:23.363 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:41:23.363112 | orchestrator | 01:41:23.363 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:41:23.363152 | orchestrator | 01:41:23.363 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.363182 | orchestrator | 01:41:23.363 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:41:23.363220 | orchestrator | 01:41:23.363 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.363238 | orchestrator | 01:41:23.363 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.363247 | orchestrator | 01:41:23.363 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:41:23.363286 | orchestrator | 01:41:23.363 STDOUT terraform:  } 2025-05-14 01:41:23.363294 | orchestrator | 01:41:23.363 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.363303 | orchestrator | 01:41:23.363 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 01:41:23.363313 | orchestrator | 01:41:23.363 STDOUT terraform:  } 2025-05-14 01:41:23.363348 | orchestrator | 01:41:23.363 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.363358 | orchestrator | 01:41:23.363 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:41:23.363381 | orchestrator | 01:41:23.363 STDOUT terraform:  } 2025-05-14 01:41:23.363427 | orchestrator | 01:41:23.363 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.363438 | orchestrator | 01:41:23.363 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 01:41:23.363474 | orchestrator | 01:41:23.363 STDOUT terraform:  } 2025-05-14 01:41:23.363482 | orchestrator | 01:41:23.363 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:41:23.363492 | orchestrator | 01:41:23.363 STDOUT terraform:  + fixed_ip { 2025-05-14 01:41:23.363520 | orchestrator | 01:41:23.363 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-14 01:41:23.363558 | orchestrator | 01:41:23.363 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:41:23.363565 | orchestrator | 01:41:23.363 STDOUT terraform:  } 2025-05-14 01:41:23.363572 | orchestrator | 01:41:23.363 STDOUT terraform:  } 2025-05-14 01:41:23.363636 | orchestrator | 01:41:23.363 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-14 01:41:23.363705 | orchestrator | 01:41:23.363 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 01:41:23.363746 | orchestrator | 01:41:23.363 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:41:23.363784 | orchestrator | 01:41:23.363 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:41:23.363814 | orchestrator | 01:41:23.363 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:41:23.363854 | orchestrator | 01:41:23.363 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.363923 | orchestrator | 01:41:23.363 STDOUT terraform:  + device_id = (known after apply) 2025-05-14 01:41:23.363934 | orchestrator | 01:41:23.363 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:41:23.363944 | orchestrator | 01:41:23.363 STDOUT terraform:  + dns_assignment = (known after apply) 
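The node_port_management[0..5] entries differ only in their fixed IP, 192.168.16.10 through 192.168.16.15 on the management network, and each carries the same four allowed_address_pairs (192.168.112.0/20, 192.168.16.254/20, 192.168.16.8/20 and 192.168.16.9/20). A sketch of a port resource matching these values is below; the subnet resource name and the dynamic-block construction are assumptions, the addresses come from the plan.

  resource "openstack_networking_port_v2" "node_port_management" {
    count      = 6
    network_id = openstack_networking_network_v2.net_management.id

    fixed_ip {
      subnet_id  = openstack_networking_subnet_v2.subnet_management.id   # assumed subnet resource
      ip_address = "192.168.16.${10 + count.index}"                      # .10 .. .15 as in the plan
    }

    # The same four pairs appear on every node port in the plan.
    dynamic "allowed_address_pairs" {
      for_each = ["192.168.112.0/20", "192.168.16.254/20", "192.168.16.8/20", "192.168.16.9/20"]
      content {
        ip_address = allowed_address_pairs.value
      }
    }
  }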
2025-05-14 01:41:23.363993 | orchestrator | 01:41:23.363 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:41:23.364032 | orchestrator | 01:41:23.363 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.364068 | orchestrator | 01:41:23.364 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:41:23.364113 | orchestrator | 01:41:23.364 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:41:23.364132 | orchestrator | 01:41:23.364 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:41:23.364180 | orchestrator | 01:41:23.364 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:41:23.364211 | orchestrator | 01:41:23.364 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.364249 | orchestrator | 01:41:23.364 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:41:23.364279 | orchestrator | 01:41:23.364 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.364288 | orchestrator | 01:41:23.364 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.364328 | orchestrator | 01:41:23.364 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:41:23.364336 | orchestrator | 01:41:23.364 STDOUT terraform:  } 2025-05-14 01:41:23.364345 | orchestrator | 01:41:23.364 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.364397 | orchestrator | 01:41:23.364 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 01:41:23.364408 | orchestrator | 01:41:23.364 STDOUT terraform:  } 2025-05-14 01:41:23.364438 | orchestrator | 01:41:23.364 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.364482 | orchestrator | 01:41:23.364 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:41:23.364490 | orchestrator | 01:41:23.364 STDOUT terraform:  } 2025-05-14 01:41:23.364499 | orchestrator | 01:41:23.364 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.364508 | orchestrator | 01:41:23.364 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 01:41:23.364518 | orchestrator | 01:41:23.364 STDOUT terraform:  } 2025-05-14 01:41:23.364559 | orchestrator | 01:41:23.364 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:41:23.364567 | orchestrator | 01:41:23.364 STDOUT terraform:  + fixed_ip { 2025-05-14 01:41:23.364603 | orchestrator | 01:41:23.364 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-05-14 01:41:23.364613 | orchestrator | 01:41:23.364 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:41:23.364622 | orchestrator | 01:41:23.364 STDOUT terraform:  } 2025-05-14 01:41:23.364631 | orchestrator | 01:41:23.364 STDOUT terraform:  } 2025-05-14 01:41:23.364693 | orchestrator | 01:41:23.364 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-14 01:41:23.364739 | orchestrator | 01:41:23.364 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-14 01:41:23.364775 | orchestrator | 01:41:23.364 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:41:23.364813 | orchestrator | 01:41:23.364 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-14 01:41:23.364843 | orchestrator | 01:41:23.364 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-14 01:41:23.364881 | orchestrator | 01:41:23.364 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.364920 | orchestrator | 01:41:23.364 STDOUT 
terraform:  + device_id = (known after apply) 2025-05-14 01:41:23.364950 | orchestrator | 01:41:23.364 STDOUT terraform:  + device_owner = (known after apply) 2025-05-14 01:41:23.365001 | orchestrator | 01:41:23.364 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-14 01:41:23.365033 | orchestrator | 01:41:23.364 STDOUT terraform:  + dns_name = (known after apply) 2025-05-14 01:41:23.365063 | orchestrator | 01:41:23.365 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.365093 | orchestrator | 01:41:23.365 STDOUT terraform:  + mac_address = (known after apply) 2025-05-14 01:41:23.365133 | orchestrator | 01:41:23.365 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:41:23.365170 | orchestrator | 01:41:23.365 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-14 01:41:23.365207 | orchestrator | 01:41:23.365 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-14 01:41:23.365238 | orchestrator | 01:41:23.365 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.365275 | orchestrator | 01:41:23.365 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-14 01:41:23.365311 | orchestrator | 01:41:23.365 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.365321 | orchestrator | 01:41:23.365 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.365358 | orchestrator | 01:41:23.365 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-14 01:41:23.365424 | orchestrator | 01:41:23.365 STDOUT terraform:  } 2025-05-14 01:41:23.365435 | orchestrator | 01:41:23.365 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.365442 | orchestrator | 01:41:23.365 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-14 01:41:23.365449 | orchestrator | 01:41:23.365 STDOUT terraform:  } 2025-05-14 01:41:23.365455 | orchestrator | 01:41:23.365 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.365464 | orchestrator | 01:41:23.365 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-14 01:41:23.365473 | orchestrator | 01:41:23.365 STDOUT terraform:  } 2025-05-14 01:41:23.365513 | orchestrator | 01:41:23.365 STDOUT terraform:  + allowed_address_pairs { 2025-05-14 01:41:23.365521 | orchestrator | 01:41:23.365 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-14 01:41:23.365530 | orchestrator | 01:41:23.365 STDOUT terraform:  } 2025-05-14 01:41:23.365539 | orchestrator | 01:41:23.365 STDOUT terraform:  + binding (known after apply) 2025-05-14 01:41:23.367031 | orchestrator | 01:41:23.365 STDOUT terraform:  + fixed_ip { 2025-05-14 01:41:23.367052 | orchestrator | 01:41:23.365 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-14 01:41:23.367060 | orchestrator | 01:41:23.365 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:41:23.367066 | orchestrator | 01:41:23.365 STDOUT terraform:  } 2025-05-14 01:41:23.367073 | orchestrator | 01:41:23.365 STDOUT terraform:  } 2025-05-14 01:41:23.367080 | orchestrator | 01:41:23.365 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-14 01:41:23.367088 | orchestrator | 01:41:23.365 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-14 01:41:23.367094 | orchestrator | 01:41:23.365 STDOUT terraform:  + force_destroy = false 2025-05-14 01:41:23.367110 | orchestrator | 01:41:23.365 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.367116 | 
orchestrator | 01:41:23.365 STDOUT terraform:  + port_id = (known after apply) 2025-05-14 01:41:23.367123 | orchestrator | 01:41:23.365 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.367130 | orchestrator | 01:41:23.365 STDOUT terraform:  + router_id = (known after apply) 2025-05-14 01:41:23.367136 | orchestrator | 01:41:23.365 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-14 01:41:23.367143 | orchestrator | 01:41:23.365 STDOUT terraform:  } 2025-05-14 01:41:23.367150 | orchestrator | 01:41:23.365 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-05-14 01:41:23.367156 | orchestrator | 01:41:23.365 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-14 01:41:23.367168 | orchestrator | 01:41:23.365 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-14 01:41:23.367175 | orchestrator | 01:41:23.365 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.367181 | orchestrator | 01:41:23.365 STDOUT terraform:  + availability_zone_hints = [ 2025-05-14 01:41:23.367188 | orchestrator | 01:41:23.365 STDOUT terraform:  + "nova", 2025-05-14 01:41:23.367195 | orchestrator | 01:41:23.366 STDOUT terraform:  ] 2025-05-14 01:41:23.367201 | orchestrator | 01:41:23.366 STDOUT terraform:  + distributed = (known after apply) 2025-05-14 01:41:23.367208 | orchestrator | 01:41:23.366 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-14 01:41:23.367215 | orchestrator | 01:41:23.366 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-14 01:41:23.367221 | orchestrator | 01:41:23.366 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.367228 | orchestrator | 01:41:23.366 STDOUT terraform:  + name = "testbed" 2025-05-14 01:41:23.367235 | orchestrator | 01:41:23.366 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.367241 | orchestrator | 01:41:23.366 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.367248 | orchestrator | 01:41:23.366 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-14 01:41:23.367255 | orchestrator | 01:41:23.366 STDOUT terraform:  } 2025-05-14 01:41:23.367261 | orchestrator | 01:41:23.366 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-14 01:41:23.367270 | orchestrator | 01:41:23.366 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-14 01:41:23.367276 | orchestrator | 01:41:23.366 STDOUT terraform:  + description = "ssh" 2025-05-14 01:41:23.367283 | orchestrator | 01:41:23.366 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:41:23.367290 | orchestrator | 01:41:23.366 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:41:23.367296 | orchestrator | 01:41:23.366 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.367303 | orchestrator | 01:41:23.366 STDOUT terraform:  + port_range_max = 22 2025-05-14 01:41:23.367310 | orchestrator | 01:41:23.366 STDOUT terraform:  + port_range_min = 22 2025-05-14 01:41:23.367329 | orchestrator | 01:41:23.366 STDOUT terraform:  + protocol = "tcp" 2025-05-14 01:41:23.367337 | orchestrator | 01:41:23.366 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.367343 | orchestrator | 01:41:23.366 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:41:23.367350 | orchestrator | 01:41:23.366 STDOUT terraform:  + remote_ip_prefix = 
"0.0.0.0/0" 2025-05-14 01:41:23.367357 | orchestrator | 01:41:23.366 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:41:23.367380 | orchestrator | 01:41:23.366 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.367397 | orchestrator | 01:41:23.366 STDOUT terraform:  } 2025-05-14 01:41:23.367409 | orchestrator | 01:41:23.366 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-14 01:41:23.367420 | orchestrator | 01:41:23.366 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-14 01:41:23.367431 | orchestrator | 01:41:23.366 STDOUT terraform:  + description = "wireguard" 2025-05-14 01:41:23.367443 | orchestrator | 01:41:23.366 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:41:23.367450 | orchestrator | 01:41:23.366 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:41:23.367457 | orchestrator | 01:41:23.366 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.367464 | orchestrator | 01:41:23.366 STDOUT terraform:  + port_range_max = 51820 2025-05-14 01:41:23.367470 | orchestrator | 01:41:23.366 STDOUT terraform:  + port_range_min = 51820 2025-05-14 01:41:23.367477 | orchestrator | 01:41:23.366 STDOUT terraform:  + protocol = "udp" 2025-05-14 01:41:23.367484 | orchestrator | 01:41:23.366 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.367495 | orchestrator | 01:41:23.366 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:41:23.367504 | orchestrator | 01:41:23.366 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:41:23.367511 | orchestrator | 01:41:23.367 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:41:23.367518 | orchestrator | 01:41:23.367 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.367525 | orchestrator | 01:41:23.367 STDOUT terraform:  } 2025-05-14 01:41:23.367532 | orchestrator | 01:41:23.367 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-14 01:41:23.367538 | orchestrator | 01:41:23.367 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-14 01:41:23.367545 | orchestrator | 01:41:23.367 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:41:23.367552 | orchestrator | 01:41:23.367 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:41:23.367559 | orchestrator | 01:41:23.367 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.367565 | orchestrator | 01:41:23.367 STDOUT terraform:  + protocol = "tcp" 2025-05-14 01:41:23.367572 | orchestrator | 01:41:23.367 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.367587 | orchestrator | 01:41:23.367 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:41:23.367593 | orchestrator | 01:41:23.367 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-14 01:41:23.367604 | orchestrator | 01:41:23.367 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:41:23.367611 | orchestrator | 01:41:23.367 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.367617 | orchestrator | 01:41:23.367 STDOUT terraform:  } 2025-05-14 01:41:23.367624 | orchestrator | 01:41:23.367 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-14 01:41:23.367631 | 
orchestrator | 01:41:23.367 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-14 01:41:23.367638 | orchestrator | 01:41:23.367 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:41:23.367645 | orchestrator | 01:41:23.367 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:41:23.367653 | orchestrator | 01:41:23.367 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.367667 | orchestrator | 01:41:23.367 STDOUT terraform:  + protocol = "udp" 2025-05-14 01:41:23.367678 | orchestrator | 01:41:23.367 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.367686 | orchestrator | 01:41:23.367 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:41:23.367698 | orchestrator | 01:41:23.367 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-14 01:41:23.367751 | orchestrator | 01:41:23.367 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:41:23.367761 | orchestrator | 01:41:23.367 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.367771 | orchestrator | 01:41:23.367 STDOUT terraform:  } 2025-05-14 01:41:23.367823 | orchestrator | 01:41:23.367 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-14 01:41:23.367877 | orchestrator | 01:41:23.367 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-14 01:41:23.367889 | orchestrator | 01:41:23.367 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:41:23.367899 | orchestrator | 01:41:23.367 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:41:23.367945 | orchestrator | 01:41:23.367 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.367957 | orchestrator | 01:41:23.367 STDOUT terraform:  + protocol = "icmp" 2025-05-14 01:41:23.367994 | orchestrator | 01:41:23.367 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.368006 | orchestrator | 01:41:23.367 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:41:23.368049 | orchestrator | 01:41:23.367 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:41:23.368061 | orchestrator | 01:41:23.368 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:41:23.368094 | orchestrator | 01:41:23.368 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.368103 | orchestrator | 01:41:23.368 STDOUT terraform:  } 2025-05-14 01:41:23.368155 | orchestrator | 01:41:23.368 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-14 01:41:23.368207 | orchestrator | 01:41:23.368 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-14 01:41:23.368220 | orchestrator | 01:41:23.368 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:41:23.368230 | orchestrator | 01:41:23.368 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:41:23.368275 | orchestrator | 01:41:23.368 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.368287 | orchestrator | 01:41:23.368 STDOUT terraform:  + protocol = "tcp" 2025-05-14 01:41:23.368320 | orchestrator | 01:41:23.368 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.368331 | orchestrator | 01:41:23.368 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:41:23.368409 | orchestrator | 01:41:23.368 STDOUT terraform:  + 
remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:41:23.368424 | orchestrator | 01:41:23.368 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:41:23.368433 | orchestrator | 01:41:23.368 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.368443 | orchestrator | 01:41:23.368 STDOUT terraform:  } 2025-05-14 01:41:23.368503 | orchestrator | 01:41:23.368 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-14 01:41:23.368562 | orchestrator | 01:41:23.368 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-14 01:41:23.368574 | orchestrator | 01:41:23.368 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:41:23.368585 | orchestrator | 01:41:23.368 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:41:23.368631 | orchestrator | 01:41:23.368 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.368643 | orchestrator | 01:41:23.368 STDOUT terraform:  + protocol = "udp" 2025-05-14 01:41:23.368676 | orchestrator | 01:41:23.368 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.368687 | orchestrator | 01:41:23.368 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:41:23.368721 | orchestrator | 01:41:23.368 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:41:23.368732 | orchestrator | 01:41:23.368 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:41:23.368780 | orchestrator | 01:41:23.368 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.368790 | orchestrator | 01:41:23.368 STDOUT terraform:  } 2025-05-14 01:41:23.368842 | orchestrator | 01:41:23.368 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-14 01:41:23.368894 | orchestrator | 01:41:23.368 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-14 01:41:23.368907 | orchestrator | 01:41:23.368 STDOUT terraform:  + direction = "ingress" 2025-05-14 01:41:23.368918 | orchestrator | 01:41:23.368 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:41:23.368963 | orchestrator | 01:41:23.368 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.368975 | orchestrator | 01:41:23.368 STDOUT terraform:  + protocol = "icmp" 2025-05-14 01:41:23.369030 | orchestrator | 01:41:23.368 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.369048 | orchestrator | 01:41:23.369 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:41:23.369090 | orchestrator | 01:41:23.369 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:41:23.369102 | orchestrator | 01:41:23.369 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:41:23.369134 | orchestrator | 01:41:23.369 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.369144 | orchestrator | 01:41:23.369 STDOUT terraform:  } 2025-05-14 01:41:23.369205 | orchestrator | 01:41:23.369 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-14 01:41:23.369255 | orchestrator | 01:41:23.369 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-14 01:41:23.369267 | orchestrator | 01:41:23.369 STDOUT terraform:  + description = "vrrp" 2025-05-14 01:41:23.369278 | orchestrator | 01:41:23.369 STDOUT terraform:  + direction = "ingress" 
2025-05-14 01:41:23.369311 | orchestrator | 01:41:23.369 STDOUT terraform:  + ethertype = "IPv4" 2025-05-14 01:41:23.369344 | orchestrator | 01:41:23.369 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.369356 | orchestrator | 01:41:23.369 STDOUT terraform:  + protocol = "112" 2025-05-14 01:41:23.369434 | orchestrator | 01:41:23.369 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.369470 | orchestrator | 01:41:23.369 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-14 01:41:23.369503 | orchestrator | 01:41:23.369 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-14 01:41:23.369545 | orchestrator | 01:41:23.369 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-14 01:41:23.369593 | orchestrator | 01:41:23.369 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.369605 | orchestrator | 01:41:23.369 STDOUT terraform:  } 2025-05-14 01:41:23.369677 | orchestrator | 01:41:23.369 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-14 01:41:23.369736 | orchestrator | 01:41:23.369 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-14 01:41:23.369770 | orchestrator | 01:41:23.369 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.369803 | orchestrator | 01:41:23.369 STDOUT terraform:  + description = "management security group" 2025-05-14 01:41:23.369845 | orchestrator | 01:41:23.369 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.369857 | orchestrator | 01:41:23.369 STDOUT terraform:  + name = "testbed-management" 2025-05-14 01:41:23.369889 | orchestrator | 01:41:23.369 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.369901 | orchestrator | 01:41:23.369 STDOUT terraform:  + stateful = (known after apply) 2025-05-14 01:41:23.369945 | orchestrator | 01:41:23.369 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.369955 | orchestrator | 01:41:23.369 STDOUT terraform:  } 2025-05-14 01:41:23.369997 | orchestrator | 01:41:23.369 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-14 01:41:23.370069 | orchestrator | 01:41:23.369 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-14 01:41:23.370082 | orchestrator | 01:41:23.370 STDOUT terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.370115 | orchestrator | 01:41:23.370 STDOUT terraform:  + description = "node security group" 2025-05-14 01:41:23.370144 | orchestrator | 01:41:23.370 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.370171 | orchestrator | 01:41:23.370 STDOUT terraform:  + name = "testbed-node" 2025-05-14 01:41:23.370199 | orchestrator | 01:41:23.370 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.370231 | orchestrator | 01:41:23.370 STDOUT terraform:  + stateful = (known after apply) 2025-05-14 01:41:23.370260 | orchestrator | 01:41:23.370 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.370272 | orchestrator | 01:41:23.370 STDOUT terraform:  } 2025-05-14 01:41:23.370330 | orchestrator | 01:41:23.370 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-14 01:41:23.370395 | orchestrator | 01:41:23.370 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-14 01:41:23.370409 | orchestrator | 01:41:23.370 STDOUT 
terraform:  + all_tags = (known after apply) 2025-05-14 01:41:23.370578 | orchestrator | 01:41:23.370 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-14 01:41:23.370694 | orchestrator | 01:41:23.370 STDOUT terraform:  + dns_nameservers = [ 2025-05-14 01:41:23.370709 | orchestrator | 01:41:23.370 STDOUT terraform:  + "8.8.8.8", 2025-05-14 01:41:23.370720 | orchestrator | 01:41:23.370 STDOUT terraform:  + "9.9.9.9", 2025-05-14 01:41:23.370731 | orchestrator | 01:41:23.370 STDOUT terraform:  ] 2025-05-14 01:41:23.370742 | orchestrator | 01:41:23.370 STDOUT terraform:  + enable_dhcp = true 2025-05-14 01:41:23.370763 | orchestrator | 01:41:23.370 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-14 01:41:23.370775 | orchestrator | 01:41:23.370 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.370786 | orchestrator | 01:41:23.370 STDOUT terraform:  + ip_version = 4 2025-05-14 01:41:23.370797 | orchestrator | 01:41:23.370 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-14 01:41:23.370808 | orchestrator | 01:41:23.370 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-14 01:41:23.370823 | orchestrator | 01:41:23.370 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-14 01:41:23.370842 | orchestrator | 01:41:23.370 STDOUT terraform:  + network_id = (known after apply) 2025-05-14 01:41:23.370857 | orchestrator | 01:41:23.370 STDOUT terraform:  + no_gateway = false 2025-05-14 01:41:23.370868 | orchestrator | 01:41:23.370 STDOUT terraform:  + region = (known after apply) 2025-05-14 01:41:23.370893 | orchestrator | 01:41:23.370 STDOUT terraform:  + service_types = (known after apply) 2025-05-14 01:41:23.370907 | orchestrator | 01:41:23.370 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-14 01:41:23.370939 | orchestrator | 01:41:23.370 STDOUT terraform:  + allocation_pool { 2025-05-14 01:41:23.370956 | orchestrator | 01:41:23.370 STDOUT terraform:  + end = "192.168.31.250" 2025-05-14 01:41:23.370967 | orchestrator | 01:41:23.370 STDOUT terraform:  + start = "192.168.31.200" 2025-05-14 01:41:23.370978 | orchestrator | 01:41:23.370 STDOUT terraform:  } 2025-05-14 01:41:23.370992 | orchestrator | 01:41:23.370 STDOUT terraform:  } 2025-05-14 01:41:23.371003 | orchestrator | 01:41:23.370 STDOUT terraform:  # terraform_data.image will be created 2025-05-14 01:41:23.371017 | orchestrator | 01:41:23.370 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-14 01:41:23.371049 | orchestrator | 01:41:23.371 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.371064 | orchestrator | 01:41:23.371 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-14 01:41:23.371104 | orchestrator | 01:41:23.371 STDOUT terraform:  + output = (known after apply) 2025-05-14 01:41:23.371117 | orchestrator | 01:41:23.371 STDOUT terraform:  } 2025-05-14 01:41:23.371131 | orchestrator | 01:41:23.371 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-14 01:41:23.371155 | orchestrator | 01:41:23.371 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-14 01:41:23.371172 | orchestrator | 01:41:23.371 STDOUT terraform:  + id = (known after apply) 2025-05-14 01:41:23.371209 | orchestrator | 01:41:23.371 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-14 01:41:23.371245 | orchestrator | 01:41:23.371 STDOUT terraform:  + output = (known after apply) 2025-05-14 01:41:23.371351 | orchestrator | 01:41:23.371 STDOUT terraform:  } 2025-05-14 01:41:23.371434 | orchestrator | 01:41:23.371 STDOUT 
terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-14 01:41:23.371451 | orchestrator | 01:41:23.371 STDOUT terraform: Changes to Outputs: 2025-05-14 01:41:23.371487 | orchestrator | 01:41:23.371 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-14 01:41:23.371525 | orchestrator | 01:41:23.371 STDOUT terraform:  + private_key = (sensitive value) 2025-05-14 01:41:23.593564 | orchestrator | 01:41:23.593 STDOUT terraform: terraform_data.image: Creating... 2025-05-14 01:41:23.593656 | orchestrator | 01:41:23.593 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=2d5e7459-330d-f9ab-d0ed-066040dd7776] 2025-05-14 01:41:23.594386 | orchestrator | 01:41:23.593 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-14 01:41:23.595095 | orchestrator | 01:41:23.594 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=e1f0a2f8-2b80-10dd-3786-134ad5853665] 2025-05-14 01:41:23.614648 | orchestrator | 01:41:23.614 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-14 01:41:23.616050 | orchestrator | 01:41:23.615 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-14 01:41:23.617001 | orchestrator | 01:41:23.616 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-14 01:41:23.621506 | orchestrator | 01:41:23.621 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-14 01:41:23.621980 | orchestrator | 01:41:23.621 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-14 01:41:23.622255 | orchestrator | 01:41:23.622 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-14 01:41:23.623187 | orchestrator | 01:41:23.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-14 01:41:23.623381 | orchestrator | 01:41:23.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-14 01:41:23.623577 | orchestrator | 01:41:23.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-14 01:41:23.629128 | orchestrator | 01:41:23.628 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-14 01:41:24.096938 | orchestrator | 01:41:24.096 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-14 01:41:24.106829 | orchestrator | 01:41:24.106 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-14 01:41:24.111118 | orchestrator | 01:41:24.110 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-14 01:41:24.115477 | orchestrator | 01:41:24.115 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-14 01:41:24.906901 | orchestrator | 01:41:24.906 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-05-14 01:41:24.919009 | orchestrator | 01:41:24.918 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-14 01:41:30.227744 | orchestrator | 01:41:30.227 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=1c769e96-f397-499a-973e-4e9f0c507b0d] 2025-05-14 01:41:30.240808 | orchestrator | 01:41:30.240 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 
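A note on the port planned above: the allowed_address_pairs entries (192.168.112.0/20 and the 192.168.16.x/20 virtual addresses) and the fixed IP 192.168.16.15 come straight from the plan output. A minimal sketch of Terraform configuration that would produce such a plan entry is shown below; the resource name, the network and subnet references, and the provider block are assumptions, only the addresses are taken from the plan.

    terraform {
      required_providers {
        openstack = {
          source = "terraform-provider-openstack/openstack"
        }
      }
    }

    # Hypothetical port definition; the real testbed configuration may differ.
    resource "openstack_networking_port_v2" "node_port_example" {
      network_id = openstack_networking_network_v2.net_management.id # assumed reference

      # Fixed address and allowed address pairs copied from the plan output above.
      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id # assumed reference
        ip_address = "192.168.16.15"
      }

      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.254/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.9/20"
      }
    }

The later sketches in this log assume the same provider requirement.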
2025-05-14 01:41:33.623561 | orchestrator | 01:41:33.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-14 01:41:33.623676 | orchestrator | 01:41:33.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-14 01:41:33.624509 | orchestrator | 01:41:33.624 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-14 01:41:33.624709 | orchestrator | 01:41:33.624 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-14 01:41:33.624920 | orchestrator | 01:41:33.624 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-14 01:41:33.625022 | orchestrator | 01:41:33.624 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-14 01:41:34.111693 | orchestrator | 01:41:34.111 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-14 01:41:34.116999 | orchestrator | 01:41:34.116 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-14 01:41:34.195129 | orchestrator | 01:41:34.194 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=5f54ee85-b545-45a6-a856-bcb5a8b0ac61] 2025-05-14 01:41:34.206284 | orchestrator | 01:41:34.205 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=dfedfdfd-f02f-46ee-b152-0d1db465af93] 2025-05-14 01:41:34.208493 | orchestrator | 01:41:34.207 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-14 01:41:34.219968 | orchestrator | 01:41:34.219 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-05-14 01:41:34.226832 | orchestrator | 01:41:34.226 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=745e2db00b6eb59af972479ded060d66d50947da] 2025-05-14 01:41:34.234999 | orchestrator | 01:41:34.234 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-14 01:41:34.254434 | orchestrator | 01:41:34.253 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=1098e660-21c4-40f1-8a57-5405cc8713a2] 2025-05-14 01:41:34.257903 | orchestrator | 01:41:34.257 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=7ac274fd-1a92-402b-b855-ca6b0ab20cf2] 2025-05-14 01:41:34.260488 | orchestrator | 01:41:34.260 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=41d88fd2-4f90-4be6-b9c2-0d02d8e1d9f7] 2025-05-14 01:41:34.263818 | orchestrator | 01:41:34.263 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-05-14 01:41:34.264602 | orchestrator | 01:41:34.264 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-14 01:41:34.268783 | orchestrator | 01:41:34.268 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
2025-05-14 01:41:34.273658 | orchestrator | 01:41:34.273 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=c5d9f68ac328447cbe1c2d6cd6812c2f17347707] 2025-05-14 01:41:34.281095 | orchestrator | 01:41:34.280 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=0315b34d-7399-4bf5-aad0-c6c82dbe1c9e] 2025-05-14 01:41:34.282001 | orchestrator | 01:41:34.281 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-14 01:41:34.286116 | orchestrator | 01:41:34.285 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-05-14 01:41:34.315382 | orchestrator | 01:41:34.315 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=1d2bee4e-0e3b-437e-a6d5-c0ab15229884] 2025-05-14 01:41:34.323178 | orchestrator | 01:41:34.322 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-14 01:41:34.331562 | orchestrator | 01:41:34.331 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=b728a659-cffd-44e0-b567-754457aa92dd] 2025-05-14 01:41:34.920037 | orchestrator | 01:41:34.919 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-14 01:41:35.109836 | orchestrator | 01:41:35.109 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=37cfb3af-bf99-4b3f-874b-d71467a37a95] 2025-05-14 01:41:40.241961 | orchestrator | 01:41:40.241 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-05-14 01:41:40.568045 | orchestrator | 01:41:40.567 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=9a543e80-bf40-43d4-b372-7aff579ec7b0] 2025-05-14 01:41:41.167724 | orchestrator | 01:41:41.167 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 7s [id=282412f6-05f5-4bca-878e-b6355b692106] 2025-05-14 01:41:41.176045 | orchestrator | 01:41:41.175 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-14 01:41:44.209130 | orchestrator | 01:41:44.208 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-14 01:41:44.237710 | orchestrator | 01:41:44.237 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-14 01:41:44.265078 | orchestrator | 01:41:44.264 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-14 01:41:44.271356 | orchestrator | 01:41:44.271 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-14 01:41:44.282783 | orchestrator | 01:41:44.282 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-05-14 01:41:44.287360 | orchestrator | 01:41:44.287 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... 
[10s elapsed] 2025-05-14 01:41:44.549274 | orchestrator | 01:41:44.548 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=d6958c45-3c69-4688-be65-10947b181749] 2025-05-14 01:41:44.572169 | orchestrator | 01:41:44.571 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=f4cd3396-0a08-4c9a-a600-88a027dd3314] 2025-05-14 01:41:44.619189 | orchestrator | 01:41:44.618 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=e2db2252-8503-4549-bea5-ecd40c91a84d] 2025-05-14 01:41:44.641689 | orchestrator | 01:41:44.641 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=fb55ef0b-86dc-4261-8469-da65bd85098d] 2025-05-14 01:41:44.656748 | orchestrator | 01:41:44.656 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=7ab8a1a8-ea98-4f35-a272-b18ca435ba8e] 2025-05-14 01:41:44.676518 | orchestrator | 01:41:44.676 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=74bbaa4f-bef6-4c72-86a9-a51ae58fe1ae] 2025-05-14 01:41:48.427684 | orchestrator | 01:41:48.427 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=e8a064db-d76d-43ba-bfe3-e01969773694] 2025-05-14 01:41:48.436903 | orchestrator | 01:41:48.436 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-14 01:41:48.437011 | orchestrator | 01:41:48.436 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-14 01:41:48.442515 | orchestrator | 01:41:48.442 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-14 01:41:48.544855 | orchestrator | 01:41:48.544 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=b4772aca-7f2d-42f9-b63d-1e7901345530] 2025-05-14 01:41:48.556719 | orchestrator | 01:41:48.556 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-05-14 01:41:48.556904 | orchestrator | 01:41:48.556 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-05-14 01:41:48.562214 | orchestrator | 01:41:48.561 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-14 01:41:48.562971 | orchestrator | 01:41:48.562 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-14 01:41:48.567197 | orchestrator | 01:41:48.567 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-05-14 01:41:48.568010 | orchestrator | 01:41:48.567 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=b2c286fd-74ea-47c8-af17-2a56386f0d4e] 2025-05-14 01:41:48.579186 | orchestrator | 01:41:48.579 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-14 01:41:48.579344 | orchestrator | 01:41:48.579 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-05-14 01:41:48.579657 | orchestrator | 01:41:48.579 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-05-14 01:41:48.591776 | orchestrator | 01:41:48.591 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 
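The router that has just finished creating and the interface now in progress correspond to the plan entries earlier: a router named "testbed" attached to the external network e6be7364-bfd8-4de7-8120-8f41c69a139a, with an interface into the management subnet. A sketch under the same assumptions as above; the subnet reference is assumed:

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id # assumed reference
    }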
2025-05-14 01:41:48.983249 | orchestrator | 01:41:48.982 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=93a93a15-64b5-4544-9283-1a3ade499d57] 2025-05-14 01:41:48.999936 | orchestrator | 01:41:48.999 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-14 01:41:49.142908 | orchestrator | 01:41:49.142 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=56204f55-2abf-4168-b0da-2792548348c8] 2025-05-14 01:41:49.151593 | orchestrator | 01:41:49.151 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-14 01:41:49.309387 | orchestrator | 01:41:49.308 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=59745271-0980-4c95-a5b3-f17c8c74ccbe] 2025-05-14 01:41:49.316189 | orchestrator | 01:41:49.315 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-14 01:41:49.391149 | orchestrator | 01:41:49.390 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=6cf858d8-499d-4622-9b06-76e375d2cc05] 2025-05-14 01:41:49.399293 | orchestrator | 01:41:49.399 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-14 01:41:49.464724 | orchestrator | 01:41:49.464 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=0806951a-7660-4d70-87da-74e42ac3b48e] 2025-05-14 01:41:49.474039 | orchestrator | 01:41:49.473 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-05-14 01:41:49.588099 | orchestrator | 01:41:49.587 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=793668f3-59e6-4192-b242-c3a9c8d3235f] 2025-05-14 01:41:49.594944 | orchestrator | 01:41:49.594 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=7aeb3765-8a2c-4cf3-b297-01ded9abca62] 2025-05-14 01:41:49.596742 | orchestrator | 01:41:49.596 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-14 01:41:49.611101 | orchestrator | 01:41:49.610 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 
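The security group rules completing here mirror the plan above: SSH (22/tcp) and WireGuard (51820/udp) open to 0.0.0.0/0, unrestricted TCP/UDP from 192.168.16.0/20, ICMP, and VRRP as raw IP protocol 112. A sketch of the management group with two representative rules; the attachment of the VRRP rule to this particular group is an assumption, everything else follows from the plan output:

    resource "openstack_networking_secgroup_v2" "security_group_management" {
      name        = "testbed-management"
      description = "management security group"
    }

    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      description       = "ssh"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "tcp"
      port_range_min    = 22
      port_range_max    = 22
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }

    # VRRP has no port range and is addressed by its IP protocol number.
    resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      description       = "vrrp"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "112"
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id # assumed attachment
    }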
2025-05-14 01:41:49.697487 | orchestrator | 01:41:49.697 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=75aa08f6-cdee-488e-9d4e-12e76fbd9889] 2025-05-14 01:41:49.807551 | orchestrator | 01:41:49.807 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=a2f5764c-45eb-482a-a3d6-f6243d1bcb9a] 2025-05-14 01:41:54.124221 | orchestrator | 01:41:54.123 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=91e2f2b9-9625-4ca2-8e22-fd0d2d5fcc9c] 2025-05-14 01:41:54.277107 | orchestrator | 01:41:54.276 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=4408801d-4256-4117-a3b5-7a5adcffbffc] 2025-05-14 01:41:54.342642 | orchestrator | 01:41:54.342 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=611b4bcc-2637-4040-bbb8-f9c529fcf472] 2025-05-14 01:41:54.445652 | orchestrator | 01:41:54.445 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=60f3b54f-a32e-4492-a75a-1914962b8131] 2025-05-14 01:41:54.584851 | orchestrator | 01:41:54.584 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=77f43765-e02c-4de1-a62d-e4bfaaaa8a1f] 2025-05-14 01:41:54.709546 | orchestrator | 01:41:54.709 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=f7983bd3-f70e-4641-aaab-9ff15f987030] 2025-05-14 01:41:55.372946 | orchestrator | 01:41:55.372 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=1f344957-c427-4527-9efc-9e5ce672ef0f] 2025-05-14 01:41:55.523783 | orchestrator | 01:41:55.523 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=bdf980b5-430f-4ae2-b21b-46f542e96608] 2025-05-14 01:41:55.535255 | orchestrator | 01:41:55.535 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-05-14 01:41:55.553267 | orchestrator | 01:41:55.553 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-14 01:41:55.558566 | orchestrator | 01:41:55.558 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-14 01:41:55.566818 | orchestrator | 01:41:55.566 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-14 01:41:55.568326 | orchestrator | 01:41:55.568 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-14 01:41:55.571978 | orchestrator | 01:41:55.571 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-14 01:41:55.572591 | orchestrator | 01:41:55.572 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-14 01:42:01.859915 | orchestrator | 01:42:01.859 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=08bc8ebb-02a4-4781-9e23-e1a6a7bbe6c2] 2025-05-14 01:42:01.875407 | orchestrator | 01:42:01.873 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-14 01:42:01.875505 | orchestrator | 01:42:01.873 STDOUT terraform: local_file.inventory: Creating... 
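The manager floating IP whose creation starts above, and the association with the management port that follows below, would typically be declared as sketched here; the external pool name is a placeholder because the log does not reveal it:

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "external" # placeholder: the actual pool name is not visible in this log
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id # assumed reference
    }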
2025-05-14 01:42:01.883472 | orchestrator | 01:42:01.883 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=e983a4bb872b2a09f88efb1e0a816acebfca76f2] 2025-05-14 01:42:01.888922 | orchestrator | 01:42:01.888 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-05-14 01:42:01.894224 | orchestrator | 01:42:01.894 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=b3091e92a53608c17c67951693a87cc5afb031ea] 2025-05-14 01:42:02.379343 | orchestrator | 01:42:02.378 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=08bc8ebb-02a4-4781-9e23-e1a6a7bbe6c2] 2025-05-14 01:42:05.558517 | orchestrator | 01:42:05.558 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-14 01:42:05.561649 | orchestrator | 01:42:05.561 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-14 01:42:05.568971 | orchestrator | 01:42:05.568 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-14 01:42:05.569030 | orchestrator | 01:42:05.568 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-14 01:42:05.573241 | orchestrator | 01:42:05.573 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-14 01:42:05.576618 | orchestrator | 01:42:05.576 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-14 01:42:15.558747 | orchestrator | 01:42:15.558 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-14 01:42:15.561802 | orchestrator | 01:42:15.561 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-14 01:42:15.569788 | orchestrator | 01:42:15.569 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-14 01:42:15.569893 | orchestrator | 01:42:15.569 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-14 01:42:15.574008 | orchestrator | 01:42:15.573 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-14 01:42:15.577482 | orchestrator | 01:42:15.577 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-14 01:42:15.954192 | orchestrator | 01:42:15.953 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=fc799e14-b66f-4756-9358-73d2733a8717] 2025-05-14 01:42:15.966646 | orchestrator | 01:42:15.966 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=78463ff4-cd56-44aa-8855-55eb8386a00a] 2025-05-14 01:42:16.015552 | orchestrator | 01:42:16.015 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=7805cff8-b3f7-4d0a-9d06-563856e25a29] 2025-05-14 01:42:25.563042 | orchestrator | 01:42:25.562 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-05-14 01:42:25.569803 | orchestrator | 01:42:25.569 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-05-14 01:42:25.578099 | orchestrator | 01:42:25.577 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... 
[30s elapsed] 2025-05-14 01:42:26.083118 | orchestrator | 01:42:26.082 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=94479752-b5a1-40da-bc5f-93485f1be576] 2025-05-14 01:42:26.111521 | orchestrator | 01:42:26.111 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=b079d8d5-ffc9-4cfb-955e-f7c3c7774127] 2025-05-14 01:42:26.237423 | orchestrator | 01:42:26.237 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=b719fc79-dfb3-45dd-8146-d9da884a9ceb] 2025-05-14 01:42:26.263377 | orchestrator | 01:42:26.262 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-14 01:42:26.265755 | orchestrator | 01:42:26.264 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-14 01:42:26.270347 | orchestrator | 01:42:26.270 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=855296054860366117] 2025-05-14 01:42:26.279059 | orchestrator | 01:42:26.277 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-05-14 01:42:26.279113 | orchestrator | 01:42:26.277 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-14 01:42:26.279119 | orchestrator | 01:42:26.277 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-14 01:42:26.282078 | orchestrator | 01:42:26.279 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-14 01:42:26.292725 | orchestrator | 01:42:26.292 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-14 01:42:26.306805 | orchestrator | 01:42:26.306 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-14 01:42:26.321177 | orchestrator | 01:42:26.320 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-14 01:42:26.324887 | orchestrator | 01:42:26.324 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-14 01:42:26.354273 | orchestrator | 01:42:26.354 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
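Each node_volume_attachment id reported below is the pair <server id>/<volume id>, which is how openstack_compute_volume_attach_v2 couples an instance with a volume. A sketch under the assumption that the volumes and servers are the indexed resources named in this log; the volume-to-server mapping is inferred from the id pairs below and may not match the real expression:

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count     = 9 # nine attachments appear in this log
      volume_id = openstack_blockstorage_volume_v3.node_volume[count.index].id

      # Inferred mapping: volumes 0,3,6 -> node_server[3], 1,4,7 -> [4], 2,5,8 -> [5].
      instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
    }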
2025-05-14 01:42:31.604967 | orchestrator | 01:42:31.604 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=b079d8d5-ffc9-4cfb-955e-f7c3c7774127/0315b34d-7399-4bf5-aad0-c6c82dbe1c9e] 2025-05-14 01:42:31.629939 | orchestrator | 01:42:31.629 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=fc799e14-b66f-4756-9358-73d2733a8717/1d2bee4e-0e3b-437e-a6d5-c0ab15229884] 2025-05-14 01:42:31.640578 | orchestrator | 01:42:31.640 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=78463ff4-cd56-44aa-8855-55eb8386a00a/41d88fd2-4f90-4be6-b9c2-0d02d8e1d9f7] 2025-05-14 01:42:31.658655 | orchestrator | 01:42:31.658 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=b079d8d5-ffc9-4cfb-955e-f7c3c7774127/b728a659-cffd-44e0-b567-754457aa92dd] 2025-05-14 01:42:31.683431 | orchestrator | 01:42:31.682 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=b079d8d5-ffc9-4cfb-955e-f7c3c7774127/dfedfdfd-f02f-46ee-b152-0d1db465af93] 2025-05-14 01:42:31.700038 | orchestrator | 01:42:31.699 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=78463ff4-cd56-44aa-8855-55eb8386a00a/37cfb3af-bf99-4b3f-874b-d71467a37a95] 2025-05-14 01:42:31.714001 | orchestrator | 01:42:31.713 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=fc799e14-b66f-4756-9358-73d2733a8717/7ac274fd-1a92-402b-b855-ca6b0ab20cf2] 2025-05-14 01:42:31.714564 | orchestrator | 01:42:31.714 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=fc799e14-b66f-4756-9358-73d2733a8717/5f54ee85-b545-45a6-a856-bcb5a8b0ac61] 2025-05-14 01:42:31.724897 | orchestrator | 01:42:31.724 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=78463ff4-cd56-44aa-8855-55eb8386a00a/1098e660-21c4-40f1-8a57-5405cc8713a2] 2025-05-14 01:42:36.351440 | orchestrator | 01:42:36.351 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-14 01:42:46.352320 | orchestrator | 01:42:46.351 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-14 01:42:46.740228 | orchestrator | 01:42:46.739 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=d6d84198-0202-461a-b66d-79b01e78bc44] 2025-05-14 01:42:46.768579 | orchestrator | 01:42:46.768 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
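The two outputs are declared sensitive, which is why the plan printed "(sensitive value)" and the values below are blank. A sketch of matching output definitions; the attributes they read from are assumptions, only the output names and the sensitive flag are visible in the log:

    output "manager_address" {
      value     = openstack_networking_floatingip_v2.manager_floating_ip.address # assumed source
      sensitive = true
    }

    output "private_key" {
      value     = openstack_compute_keypair_v2.key.private_key # assumed source
      sensitive = true
    }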
2025-05-14 01:42:46.768666 | orchestrator | 01:42:46.768 STDOUT terraform: Outputs: 2025-05-14 01:42:46.768686 | orchestrator | 01:42:46.768 STDOUT terraform: manager_address = 2025-05-14 01:42:46.768714 | orchestrator | 01:42:46.768 STDOUT terraform: private_key = 2025-05-14 01:42:47.245794 | orchestrator | ok: Runtime: 0:01:33.675991 2025-05-14 01:42:47.272883 | 2025-05-14 01:42:47.273006 | TASK [Fetch manager address] 2025-05-14 01:42:47.727713 | orchestrator | ok 2025-05-14 01:42:47.738046 | 2025-05-14 01:42:47.738267 | TASK [Set manager_host address] 2025-05-14 01:42:47.830858 | orchestrator | ok 2025-05-14 01:42:47.844656 | 2025-05-14 01:42:47.844859 | LOOP [Update ansible collections] 2025-05-14 01:42:48.913999 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-14 01:42:48.914403 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-14 01:42:48.914454 | orchestrator | Starting galaxy collection install process 2025-05-14 01:42:48.914489 | orchestrator | Process install dependency map 2025-05-14 01:42:48.914519 | orchestrator | Starting collection install process 2025-05-14 01:42:48.914548 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-05-14 01:42:48.914583 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-05-14 01:42:48.914619 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-14 01:42:48.914687 | orchestrator | ok: Item: commons Runtime: 0:00:00.619323 2025-05-14 01:42:49.813445 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-14 01:42:49.813682 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-14 01:42:49.813736 | orchestrator | Starting galaxy collection install process 2025-05-14 01:42:49.813771 | orchestrator | Process install dependency map 2025-05-14 01:42:49.813803 | orchestrator | Starting collection install process 2025-05-14 01:42:49.813832 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-05-14 01:42:49.813863 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-05-14 01:42:49.813890 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-14 01:42:49.813936 | orchestrator | ok: Item: services Runtime: 0:00:00.623064 2025-05-14 01:42:49.839793 | 2025-05-14 01:42:49.840015 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-14 01:43:00.397434 | orchestrator | ok 2025-05-14 01:43:00.407224 | 2025-05-14 01:43:00.407357 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-14 01:44:00.449220 | orchestrator | ok 2025-05-14 01:44:00.459807 | 2025-05-14 01:44:00.459981 | TASK [Fetch manager ssh hostkey] 2025-05-14 01:44:02.046125 | orchestrator | Output suppressed because no_log was given 2025-05-14 01:44:02.061711 | 2025-05-14 01:44:02.061891 | TASK [Get ssh keypair from terraform environment] 2025-05-14 01:44:02.598675 | orchestrator | ok: Runtime: 0:00:00.010542 2025-05-14 01:44:02.615987 | 2025-05-14 01:44:02.616172 | TASK [Point out that the following task takes some time and does not give any output] 
2025-05-14 01:44:02.656656 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-14 01:44:02.666696 | 2025-05-14 01:44:02.666890 | TASK [Run manager part 0] 2025-05-14 01:44:03.525330 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-14 01:44:03.564479 | orchestrator | 2025-05-14 01:44:03.564524 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-14 01:44:03.564531 | orchestrator | 2025-05-14 01:44:03.564544 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-14 01:44:05.343270 | orchestrator | ok: [testbed-manager] 2025-05-14 01:44:05.343342 | orchestrator | 2025-05-14 01:44:05.343372 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-14 01:44:05.343384 | orchestrator | 2025-05-14 01:44:05.343395 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 01:44:07.232145 | orchestrator | ok: [testbed-manager] 2025-05-14 01:44:07.232232 | orchestrator | 2025-05-14 01:44:07.232251 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-14 01:44:07.921705 | orchestrator | ok: [testbed-manager] 2025-05-14 01:44:07.921737 | orchestrator | 2025-05-14 01:44:07.921744 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-14 01:44:07.964183 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:44:07.964222 | orchestrator | 2025-05-14 01:44:07.964232 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-14 01:44:07.986111 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:44:07.986147 | orchestrator | 2025-05-14 01:44:07.986155 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-14 01:44:08.007517 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:44:08.007554 | orchestrator | 2025-05-14 01:44:08.007560 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-14 01:44:08.036174 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:44:08.036215 | orchestrator | 2025-05-14 01:44:08.036223 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-14 01:44:08.078164 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:44:08.078201 | orchestrator | 2025-05-14 01:44:08.078211 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-14 01:44:08.112779 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:44:08.112859 | orchestrator | 2025-05-14 01:44:08.112885 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-14 01:44:08.153372 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:44:08.153423 | orchestrator | 2025-05-14 01:44:08.153432 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-14 01:44:09.003462 | orchestrator | changed: [testbed-manager] 2025-05-14 01:44:09.003557 | orchestrator | 2025-05-14 01:44:09.003575 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-05-14 01:47:07.387072 | orchestrator | changed: [testbed-manager] 2025-05-14 01:47:07.387259 | orchestrator | 2025-05-14 01:47:07.387281 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-14 01:48:22.982210 | orchestrator | changed: [testbed-manager] 2025-05-14 01:48:22.982309 | orchestrator | 2025-05-14 01:48:22.982326 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-14 01:48:42.965977 | orchestrator | changed: [testbed-manager] 2025-05-14 01:48:42.966125 | orchestrator | 2025-05-14 01:48:42.966148 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-14 01:48:51.300722 | orchestrator | changed: [testbed-manager] 2025-05-14 01:48:51.300811 | orchestrator | 2025-05-14 01:48:51.300826 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-14 01:48:51.346127 | orchestrator | ok: [testbed-manager] 2025-05-14 01:48:51.346200 | orchestrator | 2025-05-14 01:48:51.346213 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-14 01:48:52.132426 | orchestrator | ok: [testbed-manager] 2025-05-14 01:48:52.132500 | orchestrator | 2025-05-14 01:48:52.132515 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-14 01:48:52.878004 | orchestrator | changed: [testbed-manager] 2025-05-14 01:48:52.878161 | orchestrator | 2025-05-14 01:48:52.878178 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-14 01:48:59.336085 | orchestrator | changed: [testbed-manager] 2025-05-14 01:48:59.336134 | orchestrator | 2025-05-14 01:48:59.336156 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-14 01:49:05.359184 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:05.359230 | orchestrator | 2025-05-14 01:49:05.359242 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-14 01:49:08.182520 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:08.182576 | orchestrator | 2025-05-14 01:49:08.182589 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-14 01:49:09.945466 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:09.945549 | orchestrator | 2025-05-14 01:49:09.945563 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-14 01:49:11.078153 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-14 01:49:11.078204 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-14 01:49:11.078213 | orchestrator | 2025-05-14 01:49:11.078222 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-14 01:49:11.128254 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-14 01:49:11.128459 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-14 01:49:11.128476 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-14 01:49:11.128512 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-14 01:49:15.170642 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-14 01:49:15.170729 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-14 01:49:15.170744 | orchestrator | 2025-05-14 01:49:15.170757 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-14 01:49:15.760860 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:15.760948 | orchestrator | 2025-05-14 01:49:15.760965 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-14 01:49:39.045470 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-14 01:49:39.045567 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-14 01:49:39.045584 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-14 01:49:39.045597 | orchestrator | 2025-05-14 01:49:39.045610 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-14 01:49:41.463818 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-14 01:49:41.463910 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-14 01:49:41.463927 | orchestrator | 2025-05-14 01:49:41.463940 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-14 01:49:41.463953 | orchestrator | 2025-05-14 01:49:41.463964 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 01:49:42.904500 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:42.904589 | orchestrator | 2025-05-14 01:49:42.904607 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-14 01:49:42.955496 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:42.955576 | orchestrator | 2025-05-14 01:49:42.955590 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-14 01:49:43.024983 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:43.025084 | orchestrator | 2025-05-14 01:49:43.025101 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-14 01:49:43.811298 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:43.811383 | orchestrator | 2025-05-14 01:49:43.811397 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-14 01:49:44.567033 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:44.567176 | orchestrator | 2025-05-14 01:49:44.567197 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-14 01:49:45.952765 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-14 01:49:45.952852 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-14 01:49:45.952868 | orchestrator | 2025-05-14 01:49:45.952896 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-14 01:49:47.356945 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:47.356992 | orchestrator | 2025-05-14 01:49:47.357000 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-14 01:49:49.192341 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 
01:49:49.193166 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-14 01:49:49.193195 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-14 01:49:49.193207 | orchestrator | 2025-05-14 01:49:49.193220 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-14 01:49:49.748981 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:49.749119 | orchestrator | 2025-05-14 01:49:49.749141 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-14 01:49:49.818102 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:49.818141 | orchestrator | 2025-05-14 01:49:49.818147 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-14 01:49:50.649096 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:49:50.649169 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:50.649183 | orchestrator | 2025-05-14 01:49:50.649194 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-14 01:49:50.684967 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:50.685037 | orchestrator | 2025-05-14 01:49:50.685050 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-14 01:49:50.722269 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:50.722337 | orchestrator | 2025-05-14 01:49:50.722354 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-14 01:49:50.763668 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:50.763758 | orchestrator | 2025-05-14 01:49:50.763775 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-14 01:49:50.823353 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:50.823400 | orchestrator | 2025-05-14 01:49:50.823407 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-14 01:49:51.544843 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:51.544909 | orchestrator | 2025-05-14 01:49:51.544925 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-14 01:49:51.544937 | orchestrator | 2025-05-14 01:49:51.544951 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 01:49:52.866295 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:52.866362 | orchestrator | 2025-05-14 01:49:52.866378 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-14 01:49:53.833945 | orchestrator | changed: [testbed-manager] 2025-05-14 01:49:53.834098 | orchestrator | 2025-05-14 01:49:53.834130 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 01:49:53.834153 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-14 01:49:53.834174 | orchestrator | 2025-05-14 01:49:54.409046 | orchestrator | ok: Runtime: 0:05:50.961056 2025-05-14 01:49:54.426789 | 2025-05-14 01:49:54.426966 | TASK [Point out that the log in on the manager is now possible] 2025-05-14 01:49:54.464877 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
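The osism.commons.operator role that just completed sets up the operator account on the manager, after which the 'make login' hint above applies. A condensed shell approximation of the steps it reports, assuming the account is named dragon (the /home/dragon paths later in this log point to that) and an assumed passwordless sudoers entry:

    # Group, user, and supplementary groups ("Create operator group/user", "Add user to additional groups").
    groupadd dragon
    useradd --create-home --gid dragon --shell /bin/bash dragon
    usermod -aG adm,sudo dragon

    # Assumed content for the file copied by "Copy user sudoers file".
    echo 'dragon ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/dragon

    # Locale exports appended by "Set language variables in .bashrc configuration file".
    printf 'export LANGUAGE=C.UTF-8\nexport LANG=C.UTF-8\nexport LC_ALL=C.UTF-8\n' >> /home/dragon/.bashrc

    # SSH directory and authorized key ("Create .ssh directory", "Set ssh authorized keys");
    # authorized_keys is a placeholder source file, the log does not show the key material.
    install -d -m 700 -o dragon -g dragon /home/dragon/.ssh
    install -m 600 -o dragon -g dragon authorized_keys /home/dragon/.ssh/authorized_keys

    # "Unset & lock password": the account stays reachable via SSH keys only.
    passwd -l dragon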
2025-05-14 01:49:54.475527 | 2025-05-14 01:49:54.475667 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-14 01:49:54.516144 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 2025-05-14 01:49:54.525760 | 2025-05-14 01:49:54.525953 | TASK [Run manager part 1 + 2] 2025-05-14 01:49:55.392657 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-14 01:49:55.458729 | orchestrator | 2025-05-14 01:49:55.458820 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-14 01:49:55.458838 | orchestrator | 2025-05-14 01:49:55.458866 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 01:49:58.039181 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:58.039365 | orchestrator | 2025-05-14 01:49:58.039421 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-14 01:49:58.075445 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:58.075499 | orchestrator | 2025-05-14 01:49:58.075508 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-14 01:49:58.120424 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:58.120482 | orchestrator | 2025-05-14 01:49:58.120491 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-14 01:49:58.163122 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:58.163180 | orchestrator | 2025-05-14 01:49:58.163189 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-14 01:49:58.234462 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:58.234525 | orchestrator | 2025-05-14 01:49:58.234533 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-14 01:49:58.296750 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:58.296804 | orchestrator | 2025-05-14 01:49:58.296812 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-14 01:49:58.343283 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-14 01:49:58.343376 | orchestrator | 2025-05-14 01:49:58.343392 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-14 01:49:59.099366 | orchestrator | ok: [testbed-manager] 2025-05-14 01:49:59.099460 | orchestrator | 2025-05-14 01:49:59.099478 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-14 01:49:59.152977 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:49:59.153102 | orchestrator | 2025-05-14 01:49:59.153120 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-14 01:50:00.567939 | orchestrator | changed: [testbed-manager] 2025-05-14 01:50:00.568014 | orchestrator | 2025-05-14 01:50:00.568030 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-14 01:50:01.174731 | orchestrator | ok: [testbed-manager] 2025-05-14 01:50:01.174833 | orchestrator | 2025-05-14 01:50:01.174850 | orchestrator | TASK 
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-14 01:50:02.359167 | orchestrator | changed: [testbed-manager] 2025-05-14 01:50:02.359251 | orchestrator | 2025-05-14 01:50:02.359268 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-14 01:50:15.579934 | orchestrator | changed: [testbed-manager] 2025-05-14 01:50:15.580072 | orchestrator | 2025-05-14 01:50:15.580089 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-14 01:50:16.258270 | orchestrator | ok: [testbed-manager] 2025-05-14 01:50:16.258520 | orchestrator | 2025-05-14 01:50:16.258547 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-14 01:50:16.315796 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:50:16.315882 | orchestrator | 2025-05-14 01:50:16.315898 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-14 01:50:17.298373 | orchestrator | changed: [testbed-manager] 2025-05-14 01:50:17.298493 | orchestrator | 2025-05-14 01:50:17.298509 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-14 01:50:18.324597 | orchestrator | changed: [testbed-manager] 2025-05-14 01:50:18.324641 | orchestrator | 2025-05-14 01:50:18.324651 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-14 01:50:18.913717 | orchestrator | changed: [testbed-manager] 2025-05-14 01:50:18.913757 | orchestrator | 2025-05-14 01:50:18.913765 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-14 01:50:18.955766 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-14 01:50:18.955872 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-14 01:50:18.955888 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-14 01:50:18.955899 | orchestrator | deprecation_warnings=False in ansible.cfg. 
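The repository role above removes the classic sources.list, puts a deb822-style ubuntu.sources file in its place, and refreshes the package cache. A generic sketch of that configuration for Ubuntu 24.04 (noble); the mirror URI and suite list are placeholders, not values taken from this log:

    # Replace the legacy sources.list with a deb822 stanza ("Remove sources.list file",
    # "Copy ubuntu.sources file"), then refresh the cache ("Update package cache").
    rm -f /etc/apt/sources.list
    printf '%s\n' \
      'Types: deb' \
      'URIs: http://archive.ubuntu.com/ubuntu' \
      'Suites: noble noble-updates noble-security' \
      'Components: main restricted universe multiverse' \
      'Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg' \
      > /etc/apt/sources.list.d/ubuntu.sources
    apt-get update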
2025-05-14 01:50:21.452536 | orchestrator | changed: [testbed-manager] 2025-05-14 01:50:21.452637 | orchestrator | 2025-05-14 01:50:21.452655 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-14 01:50:30.432792 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-14 01:50:30.432889 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-14 01:50:30.432906 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-14 01:50:30.432918 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-14 01:50:30.432930 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-14 01:50:30.432941 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-14 01:50:30.432952 | orchestrator | 2025-05-14 01:50:30.432965 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-14 01:50:31.490864 | orchestrator | changed: [testbed-manager] 2025-05-14 01:50:31.490905 | orchestrator | 2025-05-14 01:50:31.490913 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-14 01:50:31.534919 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:50:31.534957 | orchestrator | 2025-05-14 01:50:31.534965 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-14 01:50:34.606180 | orchestrator | changed: [testbed-manager] 2025-05-14 01:50:34.606288 | orchestrator | 2025-05-14 01:50:34.606306 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-14 01:50:34.648756 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:50:34.648856 | orchestrator | 2025-05-14 01:50:34.648874 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-14 01:52:09.950148 | orchestrator | changed: [testbed-manager] 2025-05-14 01:52:09.950422 | orchestrator | 2025-05-14 01:52:09.950454 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-14 01:52:10.925355 | orchestrator | ok: [testbed-manager] 2025-05-14 01:52:10.925443 | orchestrator | 2025-05-14 01:52:10.925460 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 01:52:10.925475 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-14 01:52:10.925486 | orchestrator | 2025-05-14 01:52:11.158902 | orchestrator | ok: Runtime: 0:02:16.200720 2025-05-14 01:52:11.175745 | 2025-05-14 01:52:11.175899 | TASK [Reboot manager] 2025-05-14 01:52:12.715554 | orchestrator | ok: Runtime: 0:00:00.902536 2025-05-14 01:52:12.732979 | 2025-05-14 01:52:12.733173 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-14 01:52:27.027835 | orchestrator | ok 2025-05-14 01:52:27.039670 | 2025-05-14 01:52:27.039807 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-14 01:53:27.084227 | orchestrator | ok 2025-05-14 01:53:27.094133 | 2025-05-14 01:53:27.094389 | TASK [Deploy manager + bootstrap nodes] 2025-05-14 01:53:29.743138 | orchestrator | 2025-05-14 01:53:29.743342 | orchestrator | # DEPLOY MANAGER 2025-05-14 01:53:29.743368 | orchestrator | 2025-05-14 01:53:29.743382 | orchestrator | + set -e 2025-05-14 01:53:29.743419 | orchestrator | + echo 2025-05-14 01:53:29.743446 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-14 01:53:29.743470 | orchestrator | + echo 2025-05-14 01:53:29.743530 | orchestrator | + cat /opt/manager-vars.sh 2025-05-14 01:53:29.747225 | orchestrator | export NUMBER_OF_NODES=6 2025-05-14 01:53:29.747258 | orchestrator | 2025-05-14 01:53:29.747270 | orchestrator | export CEPH_VERSION=reef 2025-05-14 01:53:29.747283 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-14 01:53:29.747296 | orchestrator | export MANAGER_VERSION=8.1.0 2025-05-14 01:53:29.747319 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-14 01:53:29.747331 | orchestrator | 2025-05-14 01:53:29.747349 | orchestrator | export ARA=false 2025-05-14 01:53:29.747361 | orchestrator | export TEMPEST=false 2025-05-14 01:53:29.747379 | orchestrator | export IS_ZUUL=true 2025-05-14 01:53:29.747391 | orchestrator | 2025-05-14 01:53:29.747452 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.80 2025-05-14 01:53:29.747474 | orchestrator | export EXTERNAL_API=false 2025-05-14 01:53:29.747490 | orchestrator | 2025-05-14 01:53:29.747512 | orchestrator | export IMAGE_USER=ubuntu 2025-05-14 01:53:29.747524 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-14 01:53:29.747534 | orchestrator | 2025-05-14 01:53:29.747549 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-14 01:53:29.747567 | orchestrator | 2025-05-14 01:53:29.747578 | orchestrator | + echo 2025-05-14 01:53:29.747589 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-14 01:53:29.748487 | orchestrator | ++ export INTERACTIVE=false 2025-05-14 01:53:29.748510 | orchestrator | ++ INTERACTIVE=false 2025-05-14 01:53:29.748521 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-14 01:53:29.748532 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-14 01:53:29.748659 | orchestrator | + source /opt/manager-vars.sh 2025-05-14 01:53:29.748675 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-14 01:53:29.748686 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-14 01:53:29.748738 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-14 01:53:29.748750 | orchestrator | ++ CEPH_VERSION=reef 2025-05-14 01:53:29.748762 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-14 01:53:29.748773 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-14 01:53:29.748784 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-14 01:53:29.748794 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-14 01:53:29.748805 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-14 01:53:29.748819 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-14 01:53:29.748830 | orchestrator | ++ export ARA=false 2025-05-14 01:53:29.748848 | orchestrator | ++ ARA=false 2025-05-14 01:53:29.748869 | orchestrator | ++ export TEMPEST=false 2025-05-14 01:53:29.748880 | orchestrator | ++ TEMPEST=false 2025-05-14 01:53:29.748894 | orchestrator | ++ export IS_ZUUL=true 2025-05-14 01:53:29.748911 | orchestrator | ++ IS_ZUUL=true 2025-05-14 01:53:29.748922 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.80 2025-05-14 01:53:29.748934 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.80 2025-05-14 01:53:29.748944 | orchestrator | ++ export EXTERNAL_API=false 2025-05-14 01:53:29.748955 | orchestrator | ++ EXTERNAL_API=false 2025-05-14 01:53:29.748966 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-14 01:53:29.748976 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-14 01:53:29.748990 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-14 01:53:29.749001 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-14 
01:53:29.749022 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-14 01:53:29.749033 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-14 01:53:29.749267 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-14 01:53:29.800320 | orchestrator | + docker version 2025-05-14 01:53:30.062141 | orchestrator | Client: Docker Engine - Community 2025-05-14 01:53:30.062243 | orchestrator | Version: 26.1.4 2025-05-14 01:53:30.062262 | orchestrator | API version: 1.45 2025-05-14 01:53:30.062274 | orchestrator | Go version: go1.21.11 2025-05-14 01:53:30.062285 | orchestrator | Git commit: 5650f9b 2025-05-14 01:53:30.062296 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-14 01:53:30.062308 | orchestrator | OS/Arch: linux/amd64 2025-05-14 01:53:30.062319 | orchestrator | Context: default 2025-05-14 01:53:30.062330 | orchestrator | 2025-05-14 01:53:30.062342 | orchestrator | Server: Docker Engine - Community 2025-05-14 01:53:30.062353 | orchestrator | Engine: 2025-05-14 01:53:30.062364 | orchestrator | Version: 26.1.4 2025-05-14 01:53:30.062375 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-05-14 01:53:30.062386 | orchestrator | Go version: go1.21.11 2025-05-14 01:53:30.062397 | orchestrator | Git commit: de5c9cf 2025-05-14 01:53:30.062474 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-14 01:53:30.062486 | orchestrator | OS/Arch: linux/amd64 2025-05-14 01:53:30.062497 | orchestrator | Experimental: false 2025-05-14 01:53:30.062508 | orchestrator | containerd: 2025-05-14 01:53:30.062518 | orchestrator | Version: 1.7.27 2025-05-14 01:53:30.062529 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-14 01:53:30.062541 | orchestrator | runc: 2025-05-14 01:53:30.062551 | orchestrator | Version: 1.2.5 2025-05-14 01:53:30.062562 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-14 01:53:30.062573 | orchestrator | docker-init: 2025-05-14 01:53:30.062583 | orchestrator | Version: 0.19.0 2025-05-14 01:53:30.062594 | orchestrator | GitCommit: de40ad0 2025-05-14 01:53:30.064099 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-14 01:53:30.072370 | orchestrator | + set -e 2025-05-14 01:53:30.072440 | orchestrator | + source /opt/manager-vars.sh 2025-05-14 01:53:30.072461 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-14 01:53:30.072479 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-14 01:53:30.072498 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-14 01:53:30.072515 | orchestrator | ++ CEPH_VERSION=reef 2025-05-14 01:53:30.072533 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-14 01:53:30.072554 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-14 01:53:30.072572 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-14 01:53:30.072590 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-14 01:53:30.072608 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-14 01:53:30.072627 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-14 01:53:30.072646 | orchestrator | ++ export ARA=false 2025-05-14 01:53:30.072664 | orchestrator | ++ ARA=false 2025-05-14 01:53:30.072683 | orchestrator | ++ export TEMPEST=false 2025-05-14 01:53:30.072702 | orchestrator | ++ TEMPEST=false 2025-05-14 01:53:30.072719 | orchestrator | ++ export IS_ZUUL=true 2025-05-14 01:53:30.072738 | orchestrator | ++ IS_ZUUL=true 2025-05-14 01:53:30.072750 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.80 2025-05-14 01:53:30.072761 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.80 2025-05-14 01:53:30.072772 | orchestrator | ++ export EXTERNAL_API=false 2025-05-14 01:53:30.072783 | orchestrator | ++ EXTERNAL_API=false 2025-05-14 01:53:30.072793 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-14 01:53:30.072804 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-14 01:53:30.072815 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-14 01:53:30.072825 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-14 01:53:30.072836 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-14 01:53:30.072846 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-14 01:53:30.072857 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-14 01:53:30.072868 | orchestrator | ++ export INTERACTIVE=false 2025-05-14 01:53:30.072878 | orchestrator | ++ INTERACTIVE=false 2025-05-14 01:53:30.072889 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-14 01:53:30.072900 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-14 01:53:30.072918 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-14 01:53:30.072929 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-05-14 01:53:30.077800 | orchestrator | + set -e 2025-05-14 01:53:30.077826 | orchestrator | + VERSION=8.1.0 2025-05-14 01:53:30.077842 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-05-14 01:53:30.085615 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-14 01:53:30.085648 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-14 01:53:30.090815 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-14 01:53:30.093799 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-05-14 01:53:30.098650 | orchestrator | /opt/configuration ~ 2025-05-14 01:53:30.098696 | orchestrator | + set -e 2025-05-14 01:53:30.098712 | orchestrator | + pushd /opt/configuration 2025-05-14 01:53:30.098726 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-14 01:53:30.100940 | orchestrator | + source /opt/venv/bin/activate 2025-05-14 01:53:30.101860 | orchestrator | ++ deactivate nondestructive 2025-05-14 01:53:30.101889 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:30.101898 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:30.101906 | orchestrator | ++ hash -r 2025-05-14 01:53:30.101914 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:30.101921 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-14 01:53:30.101930 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-14 01:53:30.101938 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-14 01:53:30.101946 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-14 01:53:30.101979 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-14 01:53:30.101988 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-14 01:53:30.101996 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-14 01:53:30.102005 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 01:53:30.102045 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 01:53:30.102055 | orchestrator | ++ export PATH 2025-05-14 01:53:30.102063 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:30.102071 | orchestrator | ++ '[' -z '' ']' 2025-05-14 01:53:30.102079 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-14 01:53:30.102086 | orchestrator | ++ PS1='(venv) ' 2025-05-14 01:53:30.102094 | orchestrator | ++ export PS1 2025-05-14 01:53:30.102102 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-14 01:53:30.102110 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-14 01:53:30.102118 | orchestrator | ++ hash -r 2025-05-14 01:53:30.102136 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-05-14 01:53:31.196060 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-05-14 01:53:31.198140 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-05-14 01:53:31.199049 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-05-14 01:53:31.200685 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-05-14 01:53:31.201963 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-05-14 01:53:31.212024 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.0) 2025-05-14 01:53:31.213898 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-05-14 01:53:31.215220 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-05-14 01:53:31.216344 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-05-14 01:53:31.248366 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-05-14 01:53:31.249948 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-05-14 01:53:31.251587 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-05-14 01:53:31.253292 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26) 2025-05-14 01:53:31.257690 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-05-14 01:53:31.496942 | orchestrator | ++ which gilt 2025-05-14 01:53:31.500070 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-05-14 01:53:31.500097 | orchestrator | + /opt/venv/bin/gilt overlay 2025-05-14 01:53:31.746901 | orchestrator | osism.cfg-generics: 2025-05-14 01:53:31.747019 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-05-14 01:53:33.268569 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-05-14 01:53:33.268708 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-05-14 01:53:33.269271 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-05-14 01:53:33.269471 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-05-14 01:53:33.992639 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-05-14 01:53:34.003916 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-05-14 01:53:34.342112 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-05-14 01:53:34.411773 | orchestrator | ~ 2025-05-14 01:53:34.411871 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-14 01:53:34.411883 | orchestrator | + deactivate 2025-05-14 01:53:34.411892 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-14 01:53:34.411903 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 01:53:34.411911 | orchestrator | + export PATH 2025-05-14 01:53:34.411919 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-14 01:53:34.411927 | orchestrator | + '[' -n '' ']' 2025-05-14 01:53:34.411934 | orchestrator | + hash -r 2025-05-14 01:53:34.411941 | orchestrator | + '[' -n '' ']' 2025-05-14 01:53:34.411948 | orchestrator | + unset VIRTUAL_ENV 2025-05-14 01:53:34.411955 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-14 01:53:34.411963 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-14 01:53:34.411970 | orchestrator | + unset -f deactivate 2025-05-14 01:53:34.411978 | orchestrator | + popd 2025-05-14 01:53:34.412579 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-14 01:53:34.412619 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-14 01:53:34.413148 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-14 01:53:34.467308 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-14 01:53:34.467463 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-14 01:53:34.467485 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-14 01:53:34.501451 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-14 01:53:34.501510 | orchestrator | + source /opt/venv/bin/activate 2025-05-14 01:53:34.501522 | orchestrator | ++ deactivate nondestructive 2025-05-14 01:53:34.501551 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:34.501563 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:34.501574 | orchestrator | ++ hash -r 2025-05-14 01:53:34.501585 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:34.501596 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-14 01:53:34.501613 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-14 01:53:34.501631 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-05-14 01:53:34.501646 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-14 01:53:34.501658 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-14 01:53:34.501669 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-14 01:53:34.501680 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-14 01:53:34.501805 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 01:53:34.501820 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 01:53:34.501887 | orchestrator | ++ export PATH 2025-05-14 01:53:34.501941 | orchestrator | ++ '[' -n '' ']' 2025-05-14 01:53:34.502066 | orchestrator | ++ '[' -z '' ']' 2025-05-14 01:53:34.502081 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-14 01:53:34.502178 | orchestrator | ++ PS1='(venv) ' 2025-05-14 01:53:34.502191 | orchestrator | ++ export PS1 2025-05-14 01:53:34.502209 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-14 01:53:34.502220 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-14 01:53:34.502235 | orchestrator | ++ hash -r 2025-05-14 01:53:34.502637 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-14 01:53:35.780572 | orchestrator | 2025-05-14 01:53:35.780703 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-14 01:53:35.780720 | orchestrator | 2025-05-14 01:53:35.780733 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-14 01:53:36.337654 | orchestrator | ok: [testbed-manager] 2025-05-14 01:53:36.337755 | orchestrator | 2025-05-14 01:53:36.337770 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-14 01:53:37.312539 | orchestrator | changed: [testbed-manager] 2025-05-14 01:53:37.312664 | orchestrator | 2025-05-14 01:53:37.312681 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-14 01:53:37.312694 | orchestrator | 2025-05-14 
01:53:37.312706 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 01:53:39.668286 | orchestrator | ok: [testbed-manager] 2025-05-14 01:53:39.668398 | orchestrator | 2025-05-14 01:53:39.668442 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-14 01:53:45.224743 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-14 01:53:45.224854 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.6.2) 2025-05-14 01:53:45.224870 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-05-14 01:53:45.224881 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-05-14 01:53:45.224893 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-05-14 01:53:45.224907 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.1-alpine) 2025-05-14 01:53:45.224919 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-05-14 01:53:45.224933 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-05-14 01:53:45.224944 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-05-14 01:53:45.224954 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.6-alpine) 2025-05-14 01:53:45.224966 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.2.1) 2025-05-14 01:53:45.224976 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.18.2) 2025-05-14 01:53:45.224987 | orchestrator | 2025-05-14 01:53:45.224999 | orchestrator | TASK [Check status] ************************************************************ 2025-05-14 01:55:01.539478 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-14 01:55:01.539656 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-14 01:55:01.539676 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-14 01:55:01.539688 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-14 01:55:01.539713 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j126226032065.1589', 'results_file': '/home/dragon/.ansible_async/j126226032065.1589', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-14 01:55:01.539733 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j888295572491.1614', 'results_file': '/home/dragon/.ansible_async/j888295572491.1614', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-05-14 01:55:01.539750 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-14 01:55:01.539761 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 
2025-05-14 01:55:01.539773 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j367001526382.1639', 'results_file': '/home/dragon/.ansible_async/j367001526382.1639', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-14 01:55:01.539784 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j168722190615.1671', 'results_file': '/home/dragon/.ansible_async/j168722190615.1671', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-14 01:55:01.539796 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j665144866918.1704', 'results_file': '/home/dragon/.ansible_async/j665144866918.1704', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-14 01:55:01.539807 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j659578905795.1736', 'results_file': '/home/dragon/.ansible_async/j659578905795.1736', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-05-14 01:55:01.539818 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-14 01:55:01.539858 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j358454524452.1768', 'results_file': '/home/dragon/.ansible_async/j358454524452.1768', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-05-14 01:55:01.539870 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j125382273862.1807', 'results_file': '/home/dragon/.ansible_async/j125382273862.1807', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-14 01:55:01.539881 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j49513400563.1841', 'results_file': '/home/dragon/.ansible_async/j49513400563.1841', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-05-14 01:55:01.539892 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j619529441921.1874', 'results_file': '/home/dragon/.ansible_async/j619529441921.1874', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-05-14 01:55:01.539904 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j157795557639.1899', 'results_file': '/home/dragon/.ansible_async/j157795557639.1899', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-05-14 01:55:01.539915 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j954591361641.1933', 'results_file': '/home/dragon/.ansible_async/j954591361641.1933', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-05-14 01:55:01.539926 | orchestrator | 2025-05-14 01:55:01.539938 | 
orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-14 01:55:01.587935 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:01.588014 | orchestrator | 2025-05-14 01:55:01.588030 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-14 01:55:02.182789 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:02.182894 | orchestrator | 2025-05-14 01:55:02.182910 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-14 01:55:02.554247 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:02.554341 | orchestrator | 2025-05-14 01:55:02.554355 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-14 01:55:02.925759 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:02.925857 | orchestrator | 2025-05-14 01:55:02.925872 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-14 01:55:02.994491 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:02.994622 | orchestrator | 2025-05-14 01:55:02.994638 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-05-14 01:55:03.355951 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:03.356050 | orchestrator | 2025-05-14 01:55:03.356065 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-14 01:55:03.477987 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:03.478136 | orchestrator | 2025-05-14 01:55:03.478153 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-14 01:55:03.478165 | orchestrator | 2025-05-14 01:55:03.478177 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 01:55:05.403447 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:05.403587 | orchestrator | 2025-05-14 01:55:05.403616 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-14 01:55:05.526148 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-14 01:55:05.526251 | orchestrator | 2025-05-14 01:55:05.526274 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-14 01:55:05.598475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-14 01:55:05.598682 | orchestrator | 2025-05-14 01:55:05.598702 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-14 01:55:06.753327 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-14 01:55:06.753439 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-14 01:55:06.753455 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-14 01:55:06.753468 | orchestrator | 2025-05-14 01:55:06.753480 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-14 01:55:08.729863 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-14 01:55:08.729968 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-14 01:55:08.729983 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-14 
01:55:08.729996 | orchestrator | 2025-05-14 01:55:08.730008 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-14 01:55:09.402813 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:55:09.402919 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:09.402936 | orchestrator | 2025-05-14 01:55:09.402971 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-14 01:55:10.074194 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:55:10.074296 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:10.074314 | orchestrator | 2025-05-14 01:55:10.074327 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-14 01:55:10.130280 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:10.130357 | orchestrator | 2025-05-14 01:55:10.130371 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-14 01:55:10.519394 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:10.519488 | orchestrator | 2025-05-14 01:55:10.519503 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-14 01:55:10.592093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-14 01:55:10.592168 | orchestrator | 2025-05-14 01:55:10.592182 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-14 01:55:11.729394 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:11.729511 | orchestrator | 2025-05-14 01:55:11.729528 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-14 01:55:12.558715 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:12.558820 | orchestrator | 2025-05-14 01:55:12.558837 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-14 01:55:15.276838 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:15.276940 | orchestrator | 2025-05-14 01:55:15.276956 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-05-14 01:55:15.406309 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-14 01:55:15.406454 | orchestrator | 2025-05-14 01:55:15.406470 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-14 01:55:15.504434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-14 01:55:15.504545 | orchestrator | 2025-05-14 01:55:15.504633 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-14 01:55:18.172145 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:18.172262 | orchestrator | 2025-05-14 01:55:18.172279 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-14 01:55:18.273958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-14 01:55:18.274111 | orchestrator | 2025-05-14 01:55:18.274128 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-14 
01:55:19.406497 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-14 01:55:19.406634 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-14 01:55:19.406650 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-14 01:55:19.406693 | orchestrator | 2025-05-14 01:55:19.406707 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-14 01:55:19.475669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-14 01:55:19.475762 | orchestrator | 2025-05-14 01:55:19.475776 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-05-14 01:55:20.160906 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-14 01:55:20.161009 | orchestrator | 2025-05-14 01:55:20.161025 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-14 01:55:20.794107 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:20.794209 | orchestrator | 2025-05-14 01:55:20.794226 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-14 01:55:21.440397 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:55:21.440504 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:21.440523 | orchestrator | 2025-05-14 01:55:21.440537 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-14 01:55:21.849657 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:21.849772 | orchestrator | 2025-05-14 01:55:21.849796 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-14 01:55:22.211360 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:22.211459 | orchestrator | 2025-05-14 01:55:22.211474 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-14 01:55:22.262414 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:22.262504 | orchestrator | 2025-05-14 01:55:22.262520 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-14 01:55:22.935320 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:22.935426 | orchestrator | 2025-05-14 01:55:22.935442 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-14 01:55:23.018307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-14 01:55:23.018398 | orchestrator | 2025-05-14 01:55:23.018413 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-14 01:55:23.787577 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-14 01:55:23.787728 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-14 01:55:23.787743 | orchestrator | 2025-05-14 01:55:23.787756 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-14 01:55:24.463452 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-14 01:55:24.463554 | orchestrator | 2025-05-14 01:55:24.463571 | orchestrator | TASK [osism.services.netbox : 
Copy netbox configuration file] ****************** 2025-05-14 01:55:25.127189 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:25.127271 | orchestrator | 2025-05-14 01:55:25.127280 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-14 01:55:25.169361 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:55:25.169419 | orchestrator | 2025-05-14 01:55:25.169431 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-14 01:55:25.817264 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:25.817371 | orchestrator | 2025-05-14 01:55:25.817388 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-14 01:55:27.646579 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:55:27.646725 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:55:27.646740 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 01:55:27.646753 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:27.646766 | orchestrator | 2025-05-14 01:55:27.646778 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-14 01:55:33.306493 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-14 01:55:33.306666 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-14 01:55:33.306686 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-14 01:55:33.306703 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-14 01:55:33.306757 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-14 01:55:33.306779 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-14 01:55:33.306799 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-14 01:55:33.306838 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-14 01:55:33.306851 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-14 01:55:33.306862 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-14 01:55:33.306874 | orchestrator | 2025-05-14 01:55:33.306886 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-14 01:55:33.935685 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-14 01:55:33.935816 | orchestrator | 2025-05-14 01:55:33.935833 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-14 01:55:34.028345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-14 01:55:34.028464 | orchestrator | 2025-05-14 01:55:34.028479 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-14 01:55:34.761169 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:34.761297 | orchestrator | 2025-05-14 01:55:34.761313 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-14 01:55:35.404408 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:35.404501 | orchestrator | 2025-05-14 01:55:35.404519 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-14 
01:55:36.199348 | orchestrator | changed: [testbed-manager] 2025-05-14 01:55:36.199476 | orchestrator | 2025-05-14 01:55:36.199491 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-14 01:55:38.674263 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:38.674370 | orchestrator | 2025-05-14 01:55:38.674378 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-14 01:55:39.649339 | orchestrator | ok: [testbed-manager] 2025-05-14 01:55:39.649469 | orchestrator | 2025-05-14 01:55:39.649485 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-14 01:56:01.912228 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-05-14 01:56:01.912354 | orchestrator | ok: [testbed-manager] 2025-05-14 01:56:01.912372 | orchestrator | 2025-05-14 01:56:01.912385 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-14 01:56:01.978580 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:56:01.978711 | orchestrator | 2025-05-14 01:56:01.978727 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-14 01:56:01.978739 | orchestrator | 2025-05-14 01:56:01.978750 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-14 01:56:02.035453 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:56:02.035531 | orchestrator | 2025-05-14 01:56:02.035547 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-14 01:56:02.100364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-14 01:56:02.100458 | orchestrator | 2025-05-14 01:56:02.100474 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-14 01:56:02.922096 | orchestrator | ok: [testbed-manager] 2025-05-14 01:56:02.922215 | orchestrator | 2025-05-14 01:56:02.922231 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-14 01:56:02.991318 | orchestrator | ok: [testbed-manager] 2025-05-14 01:56:02.991414 | orchestrator | 2025-05-14 01:56:02.991429 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-14 01:56:03.051031 | orchestrator | ok: [testbed-manager] => { 2025-05-14 01:56:03.051125 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-14 01:56:03.051141 | orchestrator | } 2025-05-14 01:56:03.051154 | orchestrator | 2025-05-14 01:56:03.051165 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-14 01:56:03.709720 | orchestrator | ok: [testbed-manager] 2025-05-14 01:56:03.709857 | orchestrator | 2025-05-14 01:56:03.709875 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-14 01:56:04.544759 | orchestrator | ok: [testbed-manager] 2025-05-14 01:56:04.544859 | orchestrator | 2025-05-14 01:56:04.544875 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-14 01:56:04.611595 | orchestrator | ok: [testbed-manager] 2025-05-14 01:56:04.611743 | orchestrator | 2025-05-14 01:56:04.611761 | 
orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-14 01:56:04.659577 | orchestrator | ok: [testbed-manager] => { 2025-05-14 01:56:04.659698 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-14 01:56:04.659714 | orchestrator | } 2025-05-14 01:56:04.659726 | orchestrator | 2025-05-14 01:56:04.659737 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-14 01:56:04.717011 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:56:04.717059 | orchestrator | 2025-05-14 01:56:04.717072 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-14 01:56:04.774308 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:56:04.774378 | orchestrator | 2025-05-14 01:56:04.774391 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-14 01:56:04.833507 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:56:04.833590 | orchestrator | 2025-05-14 01:56:04.833604 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-14 01:56:04.890511 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:56:04.890551 | orchestrator | 2025-05-14 01:56:04.890564 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-14 01:56:04.956292 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:56:04.956356 | orchestrator | 2025-05-14 01:56:04.956369 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-14 01:56:05.127325 | orchestrator | skipping: [testbed-manager] 2025-05-14 01:56:05.127419 | orchestrator | 2025-05-14 01:56:05.127439 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-14 01:56:06.381108 | orchestrator | changed: [testbed-manager] 2025-05-14 01:56:06.381207 | orchestrator | 2025-05-14 01:56:06.381224 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-14 01:56:06.447611 | orchestrator | ok: [testbed-manager] 2025-05-14 01:56:06.447728 | orchestrator | 2025-05-14 01:56:06.447742 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-14 01:57:06.504607 | orchestrator | Pausing for 60 seconds 2025-05-14 01:57:06.504816 | orchestrator | changed: [testbed-manager] 2025-05-14 01:57:06.504836 | orchestrator | 2025-05-14 01:57:06.504849 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-05-14 01:57:06.573458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-14 01:57:06.573543 | orchestrator | 2025-05-14 01:57:06.573556 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-14 02:01:50.030961 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-05-14 02:01:50.031190 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-05-14 02:01:50.031213 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 
2025-05-14 02:01:50.031225 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-14 02:01:50.031236 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-14 02:01:50.031246 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-14 02:01:50.031257 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-05-14 02:01:50.031268 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-14 02:01:50.031278 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-14 02:01:50.031317 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-14 02:01:50.031328 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-14 02:01:50.031339 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-14 02:01:50.031350 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-14 02:01:50.031360 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-05-14 02:01:50.031371 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-14 02:01:50.031384 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-14 02:01:50.031394 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-14 02:01:50.031405 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-14 02:01:50.031415 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-14 02:01:50.031425 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-14 02:01:50.031436 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-05-14 02:01:50.031446 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-05-14 02:01:50.031457 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-05-14 02:01:50.031467 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 2025-05-14 02:01:50.031478 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (36 retries left). 2025-05-14 02:01:50.031488 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (35 retries left). 2025-05-14 02:01:50.031501 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (34 retries left). 
2025-05-14 02:01:50.031515 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:50.031529 | orchestrator | 2025-05-14 02:01:50.031543 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-14 02:01:50.031556 | orchestrator | 2025-05-14 02:01:50.031569 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 02:01:53.223456 | orchestrator | ok: [testbed-manager] 2025-05-14 02:01:53.223562 | orchestrator | 2025-05-14 02:01:53.223580 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-14 02:01:53.332562 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-14 02:01:53.332652 | orchestrator | 2025-05-14 02:01:53.332666 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-14 02:01:53.405898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-14 02:01:53.405983 | orchestrator | 2025-05-14 02:01:53.405997 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-14 02:01:55.376192 | orchestrator | ok: [testbed-manager] 2025-05-14 02:01:55.376303 | orchestrator | 2025-05-14 02:01:55.376322 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-14 02:01:55.423204 | orchestrator | ok: [testbed-manager] 2025-05-14 02:01:55.423287 | orchestrator | 2025-05-14 02:01:55.423302 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-14 02:01:55.520339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-14 02:01:55.520422 | orchestrator | 2025-05-14 02:01:55.520462 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-05-14 02:01:58.510522 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-14 02:01:58.510631 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-14 02:01:58.510646 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-14 02:01:58.510659 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-14 02:01:58.510670 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-14 02:01:58.510682 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-14 02:01:58.510693 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-14 02:01:58.510704 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-14 02:01:58.510716 | orchestrator | 2025-05-14 02:01:58.510728 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-14 02:01:59.184532 | orchestrator | changed: [testbed-manager] 2025-05-14 02:01:59.184634 | orchestrator | 2025-05-14 02:01:59.184651 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-14 02:01:59.281823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-14 02:01:59.281909 | orchestrator | 2025-05-14 02:01:59.281922 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2025-05-14 02:02:00.530622 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-14 02:02:00.530738 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-14 02:02:00.530753 | orchestrator | 2025-05-14 02:02:00.530766 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-14 02:02:01.233476 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:01.233583 | orchestrator | 2025-05-14 02:02:01.233621 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-14 02:02:01.311456 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:02:01.311555 | orchestrator | 2025-05-14 02:02:01.311571 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-14 02:02:01.389546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-14 02:02:01.389651 | orchestrator | 2025-05-14 02:02:01.389674 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-14 02:02:02.819273 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 02:02:02.819369 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 02:02:02.819382 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:02.819393 | orchestrator | 2025-05-14 02:02:02.819403 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-14 02:02:03.475687 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:03.475788 | orchestrator | 2025-05-14 02:02:03.475804 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-14 02:02:03.557102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-14 02:02:03.557217 | orchestrator | 2025-05-14 02:02:03.557231 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-14 02:02:04.702681 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 02:02:04.702790 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 02:02:04.702807 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:04.702821 | orchestrator | 2025-05-14 02:02:04.702834 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-14 02:02:05.310331 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:05.310430 | orchestrator | 2025-05-14 02:02:05.310445 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-14 02:02:05.419227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-14 02:02:05.419311 | orchestrator | 2025-05-14 02:02:05.419323 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-14 02:02:06.077544 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:06.077692 | orchestrator | 2025-05-14 02:02:06.077708 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-14 02:02:06.492335 | orchestrator | changed: 
[testbed-manager] 2025-05-14 02:02:06.492447 | orchestrator | 2025-05-14 02:02:06.492464 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-14 02:02:07.801674 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-14 02:02:07.801783 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-14 02:02:07.801800 | orchestrator | 2025-05-14 02:02:07.801813 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-14 02:02:08.625472 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:08.625571 | orchestrator | 2025-05-14 02:02:08.625587 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-14 02:02:09.059770 | orchestrator | ok: [testbed-manager] 2025-05-14 02:02:09.059868 | orchestrator | 2025-05-14 02:02:09.059884 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-05-14 02:02:09.452648 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:09.452748 | orchestrator | 2025-05-14 02:02:09.452763 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-14 02:02:09.502270 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:02:09.502334 | orchestrator | 2025-05-14 02:02:09.502350 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-14 02:02:09.584675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-14 02:02:09.584755 | orchestrator | 2025-05-14 02:02:09.584770 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-14 02:02:09.638824 | orchestrator | ok: [testbed-manager] 2025-05-14 02:02:09.638890 | orchestrator | 2025-05-14 02:02:09.638904 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-14 02:02:11.806334 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-14 02:02:11.806452 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-14 02:02:11.806468 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-14 02:02:11.806480 | orchestrator | 2025-05-14 02:02:11.806492 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-14 02:02:12.572224 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:12.572324 | orchestrator | 2025-05-14 02:02:12.572339 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-14 02:02:13.331728 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:13.331832 | orchestrator | 2025-05-14 02:02:13.331849 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-05-14 02:02:14.089563 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:14.089639 | orchestrator | 2025-05-14 02:02:14.089646 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-14 02:02:14.179782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-14 02:02:14.179877 | orchestrator | 2025-05-14 02:02:14.179892 | orchestrator | TASK [osism.services.manager : 
Include scripts vars file] ********************** 2025-05-14 02:02:14.224052 | orchestrator | ok: [testbed-manager] 2025-05-14 02:02:14.224140 | orchestrator | 2025-05-14 02:02:14.224178 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-14 02:02:14.951554 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-14 02:02:14.951660 | orchestrator | 2025-05-14 02:02:14.951677 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-14 02:02:15.061070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-14 02:02:15.061223 | orchestrator | 2025-05-14 02:02:15.061262 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-14 02:02:15.861230 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:15.861352 | orchestrator | 2025-05-14 02:02:15.861368 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-14 02:02:16.576787 | orchestrator | ok: [testbed-manager] 2025-05-14 02:02:16.576884 | orchestrator | 2025-05-14 02:02:16.576898 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-14 02:02:16.637300 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:02:16.637333 | orchestrator | 2025-05-14 02:02:16.637346 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-14 02:02:16.713911 | orchestrator | ok: [testbed-manager] 2025-05-14 02:02:16.713965 | orchestrator | 2025-05-14 02:02:16.713978 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-14 02:02:17.649006 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:17.649111 | orchestrator | 2025-05-14 02:02:17.649128 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-14 02:02:57.331174 | orchestrator | changed: [testbed-manager] 2025-05-14 02:02:57.331337 | orchestrator | 2025-05-14 02:02:57.331355 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-14 02:02:58.008256 | orchestrator | ok: [testbed-manager] 2025-05-14 02:02:58.008359 | orchestrator | 2025-05-14 02:02:58.008375 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-14 02:03:00.883500 | orchestrator | changed: [testbed-manager] 2025-05-14 02:03:00.883614 | orchestrator | 2025-05-14 02:03:00.883631 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-14 02:03:00.954443 | orchestrator | ok: [testbed-manager] 2025-05-14 02:03:00.954518 | orchestrator | 2025-05-14 02:03:00.954525 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-14 02:03:00.954530 | orchestrator | 2025-05-14 02:03:00.954535 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-14 02:03:01.006843 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:03:01.006946 | orchestrator | 2025-05-14 02:03:01.006962 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-05-14 02:04:01.083280 | orchestrator | Pausing for 60 seconds 2025-05-14 02:04:01.083474 | 
orchestrator | changed: [testbed-manager] 2025-05-14 02:04:01.083489 | orchestrator | 2025-05-14 02:04:01.083502 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-14 02:04:06.609060 | orchestrator | changed: [testbed-manager] 2025-05-14 02:04:06.609201 | orchestrator | 2025-05-14 02:04:06.609236 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-14 02:04:48.267729 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-14 02:04:48.267854 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-05-14 02:04:48.267869 | orchestrator | changed: [testbed-manager] 2025-05-14 02:04:48.267882 | orchestrator | 2025-05-14 02:04:48.267894 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-14 02:04:54.567413 | orchestrator | changed: [testbed-manager] 2025-05-14 02:04:54.567517 | orchestrator | 2025-05-14 02:04:54.567531 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-14 02:04:54.649728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-14 02:04:54.649834 | orchestrator | 2025-05-14 02:04:54.649854 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-14 02:04:54.649867 | orchestrator | 2025-05-14 02:04:54.649878 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-14 02:04:54.703510 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:04:54.703602 | orchestrator | 2025-05-14 02:04:54.703616 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:04:54.703628 | orchestrator | testbed-manager : ok=109 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-14 02:04:54.703639 | orchestrator | 2025-05-14 02:04:54.839043 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-14 02:04:54.839146 | orchestrator | + deactivate 2025-05-14 02:04:54.839170 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-14 02:04:54.839194 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-14 02:04:54.839251 | orchestrator | + export PATH 2025-05-14 02:04:54.839264 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-14 02:04:54.839276 | orchestrator | + '[' -n '' ']' 2025-05-14 02:04:54.839287 | orchestrator | + hash -r 2025-05-14 02:04:54.839298 | orchestrator | + '[' -n '' ']' 2025-05-14 02:04:54.839309 | orchestrator | + unset VIRTUAL_ENV 2025-05-14 02:04:54.839320 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-14 02:04:54.839331 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-14 02:04:54.839342 | orchestrator | + unset -f deactivate 2025-05-14 02:04:54.839354 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-14 02:04:54.844559 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-14 02:04:54.844594 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-14 02:04:54.844606 | orchestrator | + local max_attempts=60 2025-05-14 02:04:54.844617 | orchestrator | + local name=ceph-ansible 2025-05-14 02:04:54.844628 | orchestrator | + local attempt_num=1 2025-05-14 02:04:54.845234 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-14 02:04:54.873844 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 02:04:54.873902 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-14 02:04:54.873915 | orchestrator | + local max_attempts=60 2025-05-14 02:04:54.873927 | orchestrator | + local name=kolla-ansible 2025-05-14 02:04:54.873938 | orchestrator | + local attempt_num=1 2025-05-14 02:04:54.874487 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-14 02:04:54.899901 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 02:04:54.899944 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-14 02:04:54.899958 | orchestrator | + local max_attempts=60 2025-05-14 02:04:54.899970 | orchestrator | + local name=osism-ansible 2025-05-14 02:04:54.899981 | orchestrator | + local attempt_num=1 2025-05-14 02:04:54.900753 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-14 02:04:54.935493 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 02:04:54.935551 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-14 02:04:54.935564 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-14 02:04:55.690509 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-14 02:04:55.734158 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-14 02:04:55.734235 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-14 02:04:55.734250 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-14 02:04:55.966317 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-14 02:04:55.966517 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-14 02:04:55.966570 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-14 02:04:55.966589 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-14 02:04:55.966604 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-14 02:04:55.966616 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-05-14 02:04:55.966627 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-05-14 02:04:55.966638 | orchestrator | manager-flower-1 
registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-05-14 02:04:55.966682 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 49 seconds (healthy) 2025-05-14 02:04:55.966693 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-05-14 02:04:55.966704 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-14 02:04:55.966715 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-05-14 02:04:55.966725 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-05-14 02:04:55.966736 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-14 02:04:55.966747 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-05-14 02:04:55.966758 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-14 02:04:55.966768 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-14 02:04:55.966779 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-05-14 02:04:55.973616 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-14 02:04:56.134058 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-14 02:04:56.134146 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 9 minutes ago Up 8 minutes (healthy) 2025-05-14 02:04:56.134156 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 9 minutes ago Up 3 minutes (healthy) 2025-05-14 02:04:56.134164 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 9 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-05-14 02:04:56.134173 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 9 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-05-14 02:04:56.139843 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-14 02:04:56.179358 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-14 02:04:56.179458 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-14 02:04:56.182311 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-14 02:04:57.858193 | orchestrator | 2025-05-14 02:04:57 | INFO  | Task 47dbe680-0b41-4c1c-bcf7-649a9e5f3f0b (resolvconf) was prepared for execution. 
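The shell trace above shows the deploy script polling container health with docker inspect (wait_for_container_healthy 60 ceph-ansible, then kolla-ansible and osism-ansible) before continuing. A minimal sketch of that helper, reconstructed from the trace and assuming a simple retry loop with a sleep interval and failure exit (only the success path is visible in the log):

#!/usr/bin/env bash
# Hedged reconstruction of the wait_for_container_healthy helper seen in the
# trace above. The argument names and the docker inspect call match the trace;
# the retry loop, sleep interval and failure handling are assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "${name}")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed polling interval; not visible in the trace
    done
}

# Usage as in the deploy script:
#   wait_for_container_healthy 60 ceph-ansible
#   wait_for_container_healthy 60 kolla-ansible
#   wait_for_container_healthy 60 osism-ansible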
2025-05-14 02:04:57.858272 | orchestrator | 2025-05-14 02:04:57 | INFO  | It takes a moment until task 47dbe680-0b41-4c1c-bcf7-649a9e5f3f0b (resolvconf) has been started and output is visible here. 2025-05-14 02:05:00.932541 | orchestrator | 2025-05-14 02:05:00.932641 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-14 02:05:00.933650 | orchestrator | 2025-05-14 02:05:00.934558 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 02:05:00.934861 | orchestrator | Wednesday 14 May 2025 02:05:00 +0000 (0:00:00.091) 0:00:00.091 ********* 2025-05-14 02:05:04.772296 | orchestrator | ok: [testbed-manager] 2025-05-14 02:05:04.772465 | orchestrator | 2025-05-14 02:05:04.774522 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-14 02:05:04.774556 | orchestrator | Wednesday 14 May 2025 02:05:04 +0000 (0:00:03.841) 0:00:03.933 ********* 2025-05-14 02:05:04.819882 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:05:04.821027 | orchestrator | 2025-05-14 02:05:04.821057 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-14 02:05:04.821457 | orchestrator | Wednesday 14 May 2025 02:05:04 +0000 (0:00:00.053) 0:00:03.986 ********* 2025-05-14 02:05:04.917911 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-14 02:05:04.918084 | orchestrator | 2025-05-14 02:05:04.918344 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-14 02:05:04.919268 | orchestrator | Wednesday 14 May 2025 02:05:04 +0000 (0:00:00.097) 0:00:04.084 ********* 2025-05-14 02:05:05.000103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-14 02:05:05.003976 | orchestrator | 2025-05-14 02:05:05.004221 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-14 02:05:05.004584 | orchestrator | Wednesday 14 May 2025 02:05:04 +0000 (0:00:00.082) 0:00:04.167 ********* 2025-05-14 02:05:05.952755 | orchestrator | ok: [testbed-manager] 2025-05-14 02:05:05.953028 | orchestrator | 2025-05-14 02:05:05.953661 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-14 02:05:05.954522 | orchestrator | Wednesday 14 May 2025 02:05:05 +0000 (0:00:00.951) 0:00:05.118 ********* 2025-05-14 02:05:06.007239 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:05:06.008194 | orchestrator | 2025-05-14 02:05:06.009183 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-14 02:05:06.009915 | orchestrator | Wednesday 14 May 2025 02:05:06 +0000 (0:00:00.054) 0:00:05.173 ********* 2025-05-14 02:05:06.428992 | orchestrator | ok: [testbed-manager] 2025-05-14 02:05:06.429117 | orchestrator | 2025-05-14 02:05:06.430113 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-14 02:05:06.432643 | orchestrator | Wednesday 14 May 2025 02:05:06 +0000 (0:00:00.419) 0:00:05.592 ********* 2025-05-14 02:05:06.507017 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:05:06.507110 | orchestrator | 2025-05-14 02:05:06.507126 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-14 02:05:06.507140 | orchestrator | Wednesday 14 May 2025 02:05:06 +0000 (0:00:00.079) 0:00:05.672 ********* 2025-05-14 02:05:07.061186 | orchestrator | changed: [testbed-manager] 2025-05-14 02:05:07.061315 | orchestrator | 2025-05-14 02:05:07.061341 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-14 02:05:07.061365 | orchestrator | Wednesday 14 May 2025 02:05:07 +0000 (0:00:00.553) 0:00:06.225 ********* 2025-05-14 02:05:08.285250 | orchestrator | changed: [testbed-manager] 2025-05-14 02:05:08.285334 | orchestrator | 2025-05-14 02:05:08.286281 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-14 02:05:08.286965 | orchestrator | Wednesday 14 May 2025 02:05:08 +0000 (0:00:01.223) 0:00:07.448 ********* 2025-05-14 02:05:09.310856 | orchestrator | ok: [testbed-manager] 2025-05-14 02:05:09.310963 | orchestrator | 2025-05-14 02:05:09.311629 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-14 02:05:09.312484 | orchestrator | Wednesday 14 May 2025 02:05:09 +0000 (0:00:01.026) 0:00:08.474 ********* 2025-05-14 02:05:09.395968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-14 02:05:09.396463 | orchestrator | 2025-05-14 02:05:09.397179 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-14 02:05:09.397622 | orchestrator | Wednesday 14 May 2025 02:05:09 +0000 (0:00:00.086) 0:00:08.560 ********* 2025-05-14 02:05:10.570079 | orchestrator | changed: [testbed-manager] 2025-05-14 02:05:10.570484 | orchestrator | 2025-05-14 02:05:10.572550 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:05:10.573512 | orchestrator | 2025-05-14 02:05:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:05:10.573539 | orchestrator | 2025-05-14 02:05:10 | INFO  | Please wait and do not abort execution. 
2025-05-14 02:05:10.574342 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:05:10.574869 | orchestrator | 2025-05-14 02:05:10.575843 | orchestrator | Wednesday 14 May 2025 02:05:10 +0000 (0:00:01.172) 0:00:09.733 ********* 2025-05-14 02:05:10.576631 | orchestrator | =============================================================================== 2025-05-14 02:05:10.577245 | orchestrator | Gathering Facts --------------------------------------------------------- 3.84s 2025-05-14 02:05:10.577765 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.22s 2025-05-14 02:05:10.578231 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2025-05-14 02:05:10.578676 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.03s 2025-05-14 02:05:10.579242 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.95s 2025-05-14 02:05:10.579628 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2025-05-14 02:05:10.580268 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.42s 2025-05-14 02:05:10.580603 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s 2025-05-14 02:05:10.581268 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-05-14 02:05:10.581725 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-05-14 02:05:10.582492 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-05-14 02:05:10.583240 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-05-14 02:05:10.583573 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-05-14 02:05:11.126232 | orchestrator | + osism apply sshconfig 2025-05-14 02:05:12.579917 | orchestrator | 2025-05-14 02:05:12 | INFO  | Task aaa796b6-9680-48f3-b88c-2a8faf938311 (sshconfig) was prepared for execution. 2025-05-14 02:05:12.580038 | orchestrator | 2025-05-14 02:05:12 | INFO  | It takes a moment until task aaa796b6-9680-48f3-b88c-2a8faf938311 (sshconfig) has been started and output is visible here. 
2025-05-14 02:05:15.727925 | orchestrator | 2025-05-14 02:05:15.728895 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-14 02:05:15.728944 | orchestrator | 2025-05-14 02:05:15.729023 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-14 02:05:15.730594 | orchestrator | Wednesday 14 May 2025 02:05:15 +0000 (0:00:00.124) 0:00:00.124 ********* 2025-05-14 02:05:16.325996 | orchestrator | ok: [testbed-manager] 2025-05-14 02:05:16.326158 | orchestrator | 2025-05-14 02:05:16.326610 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-14 02:05:16.326638 | orchestrator | Wednesday 14 May 2025 02:05:16 +0000 (0:00:00.597) 0:00:00.721 ********* 2025-05-14 02:05:16.860242 | orchestrator | changed: [testbed-manager] 2025-05-14 02:05:16.860340 | orchestrator | 2025-05-14 02:05:16.860378 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-14 02:05:16.862750 | orchestrator | Wednesday 14 May 2025 02:05:16 +0000 (0:00:00.531) 0:00:01.253 ********* 2025-05-14 02:05:22.663049 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-14 02:05:22.663199 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-14 02:05:22.664183 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-14 02:05:22.665231 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-05-14 02:05:22.666297 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-14 02:05:22.667312 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-05-14 02:05:22.669039 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-14 02:05:22.670908 | orchestrator | 2025-05-14 02:05:22.671550 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-14 02:05:22.674087 | orchestrator | Wednesday 14 May 2025 02:05:22 +0000 (0:00:05.804) 0:00:07.058 ********* 2025-05-14 02:05:22.731700 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:05:22.731784 | orchestrator | 2025-05-14 02:05:22.732485 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-14 02:05:22.732628 | orchestrator | Wednesday 14 May 2025 02:05:22 +0000 (0:00:00.071) 0:00:07.130 ********* 2025-05-14 02:05:23.270776 | orchestrator | changed: [testbed-manager] 2025-05-14 02:05:23.271132 | orchestrator | 2025-05-14 02:05:23.272619 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:05:23.273494 | orchestrator | 2025-05-14 02:05:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:05:23.273532 | orchestrator | 2025-05-14 02:05:23 | INFO  | Please wait and do not abort execution. 
2025-05-14 02:05:23.274204 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:05:23.274917 | orchestrator | 2025-05-14 02:05:23.275591 | orchestrator | Wednesday 14 May 2025 02:05:23 +0000 (0:00:00.539) 0:00:07.670 ********* 2025-05-14 02:05:23.276265 | orchestrator | =============================================================================== 2025-05-14 02:05:23.276698 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.80s 2025-05-14 02:05:23.277587 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.60s 2025-05-14 02:05:23.278364 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s 2025-05-14 02:05:23.278823 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s 2025-05-14 02:05:23.279693 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-05-14 02:05:23.563323 | orchestrator | + osism apply known-hosts 2025-05-14 02:05:24.883000 | orchestrator | 2025-05-14 02:05:24 | INFO  | Task 8dc74eeb-306d-453d-a51a-d61167351d43 (known-hosts) was prepared for execution. 2025-05-14 02:05:24.883101 | orchestrator | 2025-05-14 02:05:24 | INFO  | It takes a moment until task 8dc74eeb-306d-453d-a51a-d61167351d43 (known-hosts) has been started and output is visible here. 2025-05-14 02:05:27.693049 | orchestrator | 2025-05-14 02:05:27.693748 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-14 02:05:27.693782 | orchestrator | 2025-05-14 02:05:27.693808 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-14 02:05:27.693821 | orchestrator | Wednesday 14 May 2025 02:05:27 +0000 (0:00:00.105) 0:00:00.105 ********* 2025-05-14 02:05:33.833773 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-14 02:05:33.834120 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-14 02:05:33.835103 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-14 02:05:33.835115 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-14 02:05:33.836051 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-14 02:05:33.836623 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-14 02:05:33.837149 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-14 02:05:33.837596 | orchestrator | 2025-05-14 02:05:33.838182 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-14 02:05:33.838632 | orchestrator | Wednesday 14 May 2025 02:05:33 +0000 (0:00:06.144) 0:00:06.250 ********* 2025-05-14 02:05:33.998289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-14 02:05:33.998379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-14 02:05:33.999472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-14 
02:05:34.001288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-14 02:05:34.001907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-14 02:05:34.003204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-14 02:05:34.004656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-14 02:05:34.006291 | orchestrator | 2025-05-14 02:05:34.007774 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:34.011206 | orchestrator | Wednesday 14 May 2025 02:05:33 +0000 (0:00:00.166) 0:00:06.416 ********* 2025-05-14 02:05:35.293343 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDUM/l382VHyRq47iHvNxeoqa0eZR1p6U+a48bBBTBjt0OLa8wdYpNuCLQCXTYM0j1uLMTYSwj8+JlqagM7Q/HY=) 2025-05-14 02:05:35.295502 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC30WW10HFmc2okEm03Zz2GlQKHTM49Xy4blW9mRTU1fNMVMnzhB5+YTz5uKhWRbXPpzj4z1N5pZHabQpvxka8kJ4v9ApOgMs9tT/NocSGWuLD/Jrtk336DU/VFfxgHH27kEwdZ5QBh8zMoqKxsTTr2fBtYDapTBj6tfPhmlth1Fi+swlV4rTZNB5ZfaanPkr0XXGILTpkHNPj/tcvtXQrUi78/awGVySXG/oLmpk0sA9pUfywqnN4EcAdmzljjnzOoGMOgLn1d5+yh71GgBOfS05/5oMCRwudZqIEnpo5JL5Q/f66MR2ThlVYP8VpF55j4XYGmSp2VLBKB3QDAOC5RCANNuQ6E0HZCElCM01BkIO6T+gRTYTbqeRhY89mvh7iXYyDLxfAAtmvlfv+6u0MFcvmN3XOFuMaObMibXy6TMhjes+I19Cs0+McTSBKeQ1JllOlglqtwHJPkJv0s3SpOXVqw9n6+Qo12eu+RVaVlALS/9m1XEaMHOuTVaOr/Mts=) 2025-05-14 02:05:35.295543 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP74EA4FQAna2ONOruPOBeHytuHyCCaFm0RUTF567TvZ) 2025-05-14 02:05:35.295559 | orchestrator | 2025-05-14 02:05:35.296321 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:35.297193 | orchestrator | Wednesday 14 May 2025 02:05:35 +0000 (0:00:01.293) 0:00:07.710 ********* 2025-05-14 02:05:36.436360 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCttL+JbeJci1WxOZD01HQtUwOGGml21GspYfF4AeWZaXW2g76262PEh90rCuK6noMtm2D/KlGpSx4rVLSWuSaqCRdCRJbyPaicM1JJe9iBT8tPwdTG4rkAKttHVtg4/8fyIBT7cmeL1IpDGpiaGIpIQ8+0XJCT6pF3q4peZn6qgyS1X2UC2cvgL+AkyykS71An58gZEatA2Japtj08P2pNDtW9TxUW+0u789txf4arFdh+e5PsWdTSi+5UFZhoRllT4fLngd8DmuiX6VZvEEZ1ygSvwNfquuN6eDDoHoJFEZLftg5kAc9MbA/0Pno28+BSQgqLzTMNQB/qF6LK/EEhltJ8W6tfa6Seup8KXcWenH9n/et05iQFWe+oOWxAvMQXCQ/X3xh8QAX39iM24Gmi4m1DcPOsbQeRVgmUClfyijfgcEO0NigurDRDB+B/T6QyM8GgDOlYR8mAdN+ZnnZoXb+w0CX5VebkCOcCOMDaCGhWEFrNWG02krlgVQLKtoc=) 2025-05-14 02:05:36.436609 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNqrHl9EfxFLB9PFxc0t4vPCn8osELcKNxC6LVLktNqJ6nmPl63oB5Qx+eJPPnA/PFlSg0XOE3K9Vv4OzGBpPd8=) 2025-05-14 02:05:36.436630 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILzHWM0/pXgKG5idH0Ae1bVAwp0Cd/rLW7X5SXeSAYg+) 2025-05-14 02:05:36.436722 | orchestrator | 2025-05-14 02:05:36.437229 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:36.437502 | orchestrator | Wednesday 14 May 2025 02:05:36 +0000 (0:00:01.142) 0:00:08.853 ********* 2025-05-14 02:05:37.557615 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHAsCuff5+78WsOjyj2AReDNLp3vqmP3ssvMWAasHIlM) 2025-05-14 02:05:37.557746 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/pZOtrU32uOGXiQFRk6YRVIbEJfvW8UhF8FxEmKUVAviyC4MC56RhkVL8QNL1JI6jDqzCjWW9COmZ4ETdNvAfIpRrJ14ZXa/SadbKPV3AAG6q2xsrN4+XI80XLPXpBlObYiYAjt3OzxZmupEhkQw0N25g/VBzs8g5DdAp1jtfhBIdfWExraJG6ir3Ajvlc4aVo1YiWWsQcIFs6KWLXxrdcy60kNgSGfcmuC3AChYziqiCuUeiPPwdr4eFlL1buVLFB22E3EKdDKtOBGM72+lichBBEgP6NBF5Jab4zIDXwbQyfN++76Ye/EjqPcjr42mfMKJlUl+lu47rdE4DNTkuibty3rIs0kmtXwMVreTNOsl/9ODBNf+F1AXjSXLXe9wOiwphMd/WnIC2+Uz603ObZdRzWSttkNJELH/7nVHA62v5vt65YlAFgxoOFqFZRDKj98XhcmAmhpXRgGjLPJxqGe98+8nd1DLV3vGkDC0jIJehw0pECHsczgwf2L9Wb58=) 2025-05-14 02:05:37.558540 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIBCDIlFCpiy0pjBvzaUPgsWQ4qgdAwW91xm9INhCwUsdPdGaIzvHkGIGYmsmf98Td2HVy9pzTE4l5VXNg1dz94=) 2025-05-14 02:05:37.559301 | orchestrator | 2025-05-14 02:05:37.559851 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:37.562583 | orchestrator | Wednesday 14 May 2025 02:05:37 +0000 (0:00:01.121) 0:00:09.974 ********* 2025-05-14 02:05:38.682977 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCHu0cWaaONF2zS/oZkSpnP0DHhW9kFEiXGJlAAwfV9hJI0zb0eQs5sLMm3jhsptMy2Af9k4BfnEKYh0LGCpda3HFq3eIYf7SlgyG/ZnuMvHSIbd271ylmsoC0d2bLiFW2iASJh6WBm1e/+4bku4DeTvOLJJGOqz/Bzr8H3Z1rFOP0Aeif9eMFKwPyxH0Fg7rv1izOcbp+OBlQg3hxfqGIEiL1BBPC/uqPGwe+LFVvdLgzSt+4pEqgmNyUT//lrQv2Juex7rl4CCeFuL0pY+LZq/NB7VLo6lPguEJDk9VkPiGHJAWeaBpssQrUCTKyuJr0lnX0L3ufGaadvvjhNYrPWU83cw+XWcG88Shhd+hUWXuo1v8dbPgR+S63z3GSTHm09wqRdbmMf+Y2p+97h6YlwNgK++vcFbxzRGsUoArkiNxZyZAacX5GOgwbWoeCN/oyLALCQn0gEGojBptpeWaaycbkMbKqb4aNa7v5/ilV+jvN6fY4juVC1dh+V64JwhS0=) 2025-05-14 02:05:38.684168 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM0/aYFPj4gD7tDAgH1U6Ay/h8kAs2ky4FXYyZ3j71NbpqOSenxZP5mZ6E4YEVjGCOA9J7pvpgVtSXinDGrJC3o=) 2025-05-14 02:05:38.684687 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGHBpY9y40eS1vHFjNnUxwKzlVbeaG6KxmiiTYl7K9CH) 2025-05-14 02:05:38.684860 | orchestrator | 2025-05-14 02:05:38.685254 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:38.685694 | orchestrator | Wednesday 14 May 2025 02:05:38 +0000 (0:00:01.122) 0:00:11.097 ********* 2025-05-14 02:05:39.905047 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCvLn4GnxPtekND9NLRQZYqffq7hippqcELN0QXVuPxCjzr1zoYO6OA3WcwcdYLAU1dSAU/VlyRO9FkBf/N7ws8LTIWPScDqwUqboU5y+v+XknPccOKnPDzrQGmNiBH3q+rG4fbGKmzZwT82olbXlCLZDecTup8a1lIf4MN5RbN4fLUdC3Na9zXn9TmD1j91rF/XFdqO65pPRXsaKjey/gKqrYp9yZLlfqpIKavO7wLlwLT0o4a0Kcp+Vjk20gGrwgukMHGXe/N8xdQr/JY1lSS7dvAyxGh9VATHn3s5AvDXgfG0Vd85TFs9Eo7MH8YG6nZyjh8OBS8sZjkqyT19p7VWtx8gsZwhPqxLUG+0gkhYuZT57FN1jDeRPn/G2VkhH2Tm3W/pGGKSZo+9C+f004cnrHkPPlyYZqAblf30Q03CN/XjmWC6vxPuQWy5yVazdZgxYCxuR2f0TWRQm/QrgWCHOca+BP+7vaCBTJqDtv49POc6Iz4TnSTcxrqddf5q/8=) 2025-05-14 02:05:39.907510 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLzkq6qCKKvG9UavNviYSWBqrdTHnE6vV0xecPNz7ctM9zk9/DJuYYO5lZi4eg4UIYdn/rIiMLiVR/4QsALL6ik=) 2025-05-14 02:05:39.908010 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKsGMbw5Cxoex2vZR3nlgqZnttlAs7xtFb2u/jkqauS8) 2025-05-14 02:05:39.908904 | orchestrator | 2025-05-14 02:05:39.909598 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:39.910088 | orchestrator | Wednesday 14 May 2025 02:05:39 +0000 (0:00:01.223) 0:00:12.321 ********* 2025-05-14 02:05:40.984288 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCNvt33PThg9LuPP9c01ObFxShPUVb5mDjwWNH206iCxoOd+DLqD07mjsbu6qU3jzJ5EIVbCibcyMiiSyg2VHDICydR9qnMG+uLiu1A6iwAI/0KLWctZWIGer1yR0MALFNB1sZZYfAOsETd6neiNJZf0BkdtJn7phD7PtA9zODFWzzGWp5g6HGb+uNWnaPTWzgpOcAsBj1BNaljUeMMA6KugcFZUEQ0FtpLYE81BeutBc5MNwec8byNd+rdHeJUuZ7VceVJv4vu9B1oI7OsnVUGMqFUpkT1RE8RJ8bYhYMxGJpZfElC8i8RMikWGi8roet3Q4wdSsSSzDZ3wchc5ps2VZlAzjgiHNByp62YxsuNIizCPWPYIa26/S/hXDMpVxwAKgFO5lnr6XxKHnmaDoj0TzOd08OYdPEJpx+OYfXrHN6t50yg2ObMAzcq0cikYMi4R/kX9jim5/V0RrqmGGBQmOSW+g5Hi6xZ+hZlpONStlol3q9jDN7Ag/CdUstEN9c=) 2025-05-14 02:05:40.984867 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJPHjGyQAwFhS7DVh9pgoQagM1khgI6r21+8EwD40R2FHioc6ZuEbHm6xfocIIpqTTec3ltDXOqWHkIPdXF1dXY=) 2025-05-14 02:05:40.984949 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBAMsMFw71DnLh0d3wap5vUo1xqilzxchLKKRIxs5DOl) 2025-05-14 02:05:40.986075 | orchestrator | 2025-05-14 02:05:40.986713 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:40.986927 | orchestrator | Wednesday 14 May 2025 02:05:40 +0000 (0:00:01.079) 0:00:13.401 ********* 2025-05-14 02:05:42.044270 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDHa3CIFL/8YhZ3MRokwlc5AG6amZaDU1vYnzxrqakXd) 2025-05-14 02:05:42.044717 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC933na22NiEjTHZMB4wMK0ILLoWESfjIVTv4hxQz5IsU/vBXT1BUEdjPdgXhU1Y+my13sgJ5+9JP5EGVuozATKQJLdnITxNseBF//yeVWdjCEbXnCSLS89DItBGUmuITWahSAspuve8r1glTbDFLUohzaswO037eOSRsM8QmFtiPgnFizDczoYHfMJThvkwgmHlz89fMHwHmqxRIC2NvCxj4qauArdUNmfu4ZMnuz2Vxnij5HG0LxA0vsd+G6J01y/7i/PGjhDsqPXrT746miqTBCf876MdkizfYCTaa+nVfRYBXj+HXEiS0kRxk6FwgD8y1wQ4aNsnyBpyWThWWcU+I/Xlv4JMgMx7fvRfDxesOfjGCtvFfHxKRTxSBqtgMNjuU35zyYRMDUsnlGE3MQ5Rfy4xSzITcPOpXKpnJeVJ5bgOKCE9C1P15xYkwp2L7NjL+wqyQzy96cQFjQzITAll2uM59aukY8MrpV3+izYNbfL7/ozDDVoLq8JZLybObs=) 2025-05-14 02:05:42.045681 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDydDS8vHkVRVpM/XOLrUDFCYz7vrgLGwAaxRoGIDIHs1c9xiN006GEY7daPN/L1Cg04V7HLF+8txJ1pkfPyajw=) 2025-05-14 02:05:42.048338 | orchestrator | 2025-05-14 02:05:42.048732 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-14 02:05:42.051062 | orchestrator | Wednesday 14 May 2025 02:05:42 +0000 (0:00:01.059) 0:00:14.461 ********* 2025-05-14 02:05:47.537891 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-14 02:05:47.538137 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-14 02:05:47.538837 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-14 02:05:47.539511 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-14 02:05:47.540473 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-14 02:05:47.542347 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-14 02:05:47.543249 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-14 02:05:47.544060 | orchestrator | 2025-05-14 02:05:47.545119 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-14 02:05:47.546109 | orchestrator | Wednesday 14 May 2025 02:05:47 +0000 (0:00:05.491) 0:00:19.952 ********* 2025-05-14 02:05:47.706856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-14 02:05:47.707804 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-14 02:05:47.708927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-14 02:05:47.710114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-14 02:05:47.710920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-14 02:05:47.712056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-14 02:05:47.712230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-14 02:05:47.712706 | orchestrator | 2025-05-14 02:05:47.712955 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:47.713685 | orchestrator | Wednesday 14 May 2025 02:05:47 +0000 (0:00:00.172) 0:00:20.125 ********* 2025-05-14 02:05:48.778578 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC30WW10HFmc2okEm03Zz2GlQKHTM49Xy4blW9mRTU1fNMVMnzhB5+YTz5uKhWRbXPpzj4z1N5pZHabQpvxka8kJ4v9ApOgMs9tT/NocSGWuLD/Jrtk336DU/VFfxgHH27kEwdZ5QBh8zMoqKxsTTr2fBtYDapTBj6tfPhmlth1Fi+swlV4rTZNB5ZfaanPkr0XXGILTpkHNPj/tcvtXQrUi78/awGVySXG/oLmpk0sA9pUfywqnN4EcAdmzljjnzOoGMOgLn1d5+yh71GgBOfS05/5oMCRwudZqIEnpo5JL5Q/f66MR2ThlVYP8VpF55j4XYGmSp2VLBKB3QDAOC5RCANNuQ6E0HZCElCM01BkIO6T+gRTYTbqeRhY89mvh7iXYyDLxfAAtmvlfv+6u0MFcvmN3XOFuMaObMibXy6TMhjes+I19Cs0+McTSBKeQ1JllOlglqtwHJPkJv0s3SpOXVqw9n6+Qo12eu+RVaVlALS/9m1XEaMHOuTVaOr/Mts=) 2025-05-14 02:05:48.778708 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDUM/l382VHyRq47iHvNxeoqa0eZR1p6U+a48bBBTBjt0OLa8wdYpNuCLQCXTYM0j1uLMTYSwj8+JlqagM7Q/HY=) 2025-05-14 02:05:48.780031 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP74EA4FQAna2ONOruPOBeHytuHyCCaFm0RUTF567TvZ) 2025-05-14 02:05:48.781035 | orchestrator | 2025-05-14 02:05:48.781686 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:48.782244 | orchestrator | Wednesday 14 May 2025 02:05:48 +0000 (0:00:01.069) 0:00:21.195 ********* 2025-05-14 02:05:49.864970 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNqrHl9EfxFLB9PFxc0t4vPCn8osELcKNxC6LVLktNqJ6nmPl63oB5Qx+eJPPnA/PFlSg0XOE3K9Vv4OzGBpPd8=) 2025-05-14 02:05:49.865146 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCttL+JbeJci1WxOZD01HQtUwOGGml21GspYfF4AeWZaXW2g76262PEh90rCuK6noMtm2D/KlGpSx4rVLSWuSaqCRdCRJbyPaicM1JJe9iBT8tPwdTG4rkAKttHVtg4/8fyIBT7cmeL1IpDGpiaGIpIQ8+0XJCT6pF3q4peZn6qgyS1X2UC2cvgL+AkyykS71An58gZEatA2Japtj08P2pNDtW9TxUW+0u789txf4arFdh+e5PsWdTSi+5UFZhoRllT4fLngd8DmuiX6VZvEEZ1ygSvwNfquuN6eDDoHoJFEZLftg5kAc9MbA/0Pno28+BSQgqLzTMNQB/qF6LK/EEhltJ8W6tfa6Seup8KXcWenH9n/et05iQFWe+oOWxAvMQXCQ/X3xh8QAX39iM24Gmi4m1DcPOsbQeRVgmUClfyijfgcEO0NigurDRDB+B/T6QyM8GgDOlYR8mAdN+ZnnZoXb+w0CX5VebkCOcCOMDaCGhWEFrNWG02krlgVQLKtoc=) 2025-05-14 02:05:49.866574 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILzHWM0/pXgKG5idH0Ae1bVAwp0Cd/rLW7X5SXeSAYg+) 2025-05-14 02:05:49.867355 | orchestrator | 2025-05-14 02:05:49.868198 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:49.869015 | orchestrator | Wednesday 14 May 2025 02:05:49 +0000 (0:00:01.086) 0:00:22.281 ********* 2025-05-14 02:05:50.976020 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHAsCuff5+78WsOjyj2AReDNLp3vqmP3ssvMWAasHIlM) 2025-05-14 02:05:50.976773 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/pZOtrU32uOGXiQFRk6YRVIbEJfvW8UhF8FxEmKUVAviyC4MC56RhkVL8QNL1JI6jDqzCjWW9COmZ4ETdNvAfIpRrJ14ZXa/SadbKPV3AAG6q2xsrN4+XI80XLPXpBlObYiYAjt3OzxZmupEhkQw0N25g/VBzs8g5DdAp1jtfhBIdfWExraJG6ir3Ajvlc4aVo1YiWWsQcIFs6KWLXxrdcy60kNgSGfcmuC3AChYziqiCuUeiPPwdr4eFlL1buVLFB22E3EKdDKtOBGM72+lichBBEgP6NBF5Jab4zIDXwbQyfN++76Ye/EjqPcjr42mfMKJlUl+lu47rdE4DNTkuibty3rIs0kmtXwMVreTNOsl/9ODBNf+F1AXjSXLXe9wOiwphMd/WnIC2+Uz603ObZdRzWSttkNJELH/7nVHA62v5vt65YlAFgxoOFqFZRDKj98XhcmAmhpXRgGjLPJxqGe98+8nd1DLV3vGkDC0jIJehw0pECHsczgwf2L9Wb58=) 2025-05-14 02:05:50.977964 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIBCDIlFCpiy0pjBvzaUPgsWQ4qgdAwW91xm9INhCwUsdPdGaIzvHkGIGYmsmf98Td2HVy9pzTE4l5VXNg1dz94=) 2025-05-14 02:05:50.978816 | orchestrator | 2025-05-14 02:05:50.979281 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:50.979809 | orchestrator | Wednesday 14 May 2025 02:05:50 +0000 (0:00:01.111) 0:00:23.393 ********* 2025-05-14 02:05:52.138567 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGHBpY9y40eS1vHFjNnUxwKzlVbeaG6KxmiiTYl7K9CH) 2025-05-14 02:05:52.138702 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCHu0cWaaONF2zS/oZkSpnP0DHhW9kFEiXGJlAAwfV9hJI0zb0eQs5sLMm3jhsptMy2Af9k4BfnEKYh0LGCpda3HFq3eIYf7SlgyG/ZnuMvHSIbd271ylmsoC0d2bLiFW2iASJh6WBm1e/+4bku4DeTvOLJJGOqz/Bzr8H3Z1rFOP0Aeif9eMFKwPyxH0Fg7rv1izOcbp+OBlQg3hxfqGIEiL1BBPC/uqPGwe+LFVvdLgzSt+4pEqgmNyUT//lrQv2Juex7rl4CCeFuL0pY+LZq/NB7VLo6lPguEJDk9VkPiGHJAWeaBpssQrUCTKyuJr0lnX0L3ufGaadvvjhNYrPWU83cw+XWcG88Shhd+hUWXuo1v8dbPgR+S63z3GSTHm09wqRdbmMf+Y2p+97h6YlwNgK++vcFbxzRGsUoArkiNxZyZAacX5GOgwbWoeCN/oyLALCQn0gEGojBptpeWaaycbkMbKqb4aNa7v5/ilV+jvN6fY4juVC1dh+V64JwhS0=) 2025-05-14 02:05:52.138796 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM0/aYFPj4gD7tDAgH1U6Ay/h8kAs2ky4FXYyZ3j71NbpqOSenxZP5mZ6E4YEVjGCOA9J7pvpgVtSXinDGrJC3o=) 2025-05-14 02:05:52.138977 | orchestrator | 2025-05-14 02:05:52.142569 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:52.142768 | orchestrator | Wednesday 14 May 2025 02:05:52 +0000 (0:00:01.161) 0:00:24.554 ********* 2025-05-14 02:05:53.251005 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLzkq6qCKKvG9UavNviYSWBqrdTHnE6vV0xecPNz7ctM9zk9/DJuYYO5lZi4eg4UIYdn/rIiMLiVR/4QsALL6ik=) 2025-05-14 02:05:53.254173 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvLn4GnxPtekND9NLRQZYqffq7hippqcELN0QXVuPxCjzr1zoYO6OA3WcwcdYLAU1dSAU/VlyRO9FkBf/N7ws8LTIWPScDqwUqboU5y+v+XknPccOKnPDzrQGmNiBH3q+rG4fbGKmzZwT82olbXlCLZDecTup8a1lIf4MN5RbN4fLUdC3Na9zXn9TmD1j91rF/XFdqO65pPRXsaKjey/gKqrYp9yZLlfqpIKavO7wLlwLT0o4a0Kcp+Vjk20gGrwgukMHGXe/N8xdQr/JY1lSS7dvAyxGh9VATHn3s5AvDXgfG0Vd85TFs9Eo7MH8YG6nZyjh8OBS8sZjkqyT19p7VWtx8gsZwhPqxLUG+0gkhYuZT57FN1jDeRPn/G2VkhH2Tm3W/pGGKSZo+9C+f004cnrHkPPlyYZqAblf30Q03CN/XjmWC6vxPuQWy5yVazdZgxYCxuR2f0TWRQm/QrgWCHOca+BP+7vaCBTJqDtv49POc6Iz4TnSTcxrqddf5q/8=) 2025-05-14 02:05:53.255286 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKsGMbw5Cxoex2vZR3nlgqZnttlAs7xtFb2u/jkqauS8) 2025-05-14 02:05:53.256015 | orchestrator | 2025-05-14 02:05:53.256201 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:53.257028 | orchestrator | Wednesday 14 May 2025 02:05:53 +0000 (0:00:01.112) 0:00:25.667 ********* 2025-05-14 02:05:54.374868 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJPHjGyQAwFhS7DVh9pgoQagM1khgI6r21+8EwD40R2FHioc6ZuEbHm6xfocIIpqTTec3ltDXOqWHkIPdXF1dXY=) 2025-05-14 02:05:54.375494 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCNvt33PThg9LuPP9c01ObFxShPUVb5mDjwWNH206iCxoOd+DLqD07mjsbu6qU3jzJ5EIVbCibcyMiiSyg2VHDICydR9qnMG+uLiu1A6iwAI/0KLWctZWIGer1yR0MALFNB1sZZYfAOsETd6neiNJZf0BkdtJn7phD7PtA9zODFWzzGWp5g6HGb+uNWnaPTWzgpOcAsBj1BNaljUeMMA6KugcFZUEQ0FtpLYE81BeutBc5MNwec8byNd+rdHeJUuZ7VceVJv4vu9B1oI7OsnVUGMqFUpkT1RE8RJ8bYhYMxGJpZfElC8i8RMikWGi8roet3Q4wdSsSSzDZ3wchc5ps2VZlAzjgiHNByp62YxsuNIizCPWPYIa26/S/hXDMpVxwAKgFO5lnr6XxKHnmaDoj0TzOd08OYdPEJpx+OYfXrHN6t50yg2ObMAzcq0cikYMi4R/kX9jim5/V0RrqmGGBQmOSW+g5Hi6xZ+hZlpONStlol3q9jDN7Ag/CdUstEN9c=) 2025-05-14 02:05:54.376306 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBAMsMFw71DnLh0d3wap5vUo1xqilzxchLKKRIxs5DOl) 2025-05-14 02:05:54.376725 | orchestrator | 2025-05-14 02:05:54.377659 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-14 02:05:54.378498 | orchestrator | Wednesday 14 May 2025 02:05:54 +0000 (0:00:01.125) 0:00:26.792 ********* 2025-05-14 02:05:55.494085 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC933na22NiEjTHZMB4wMK0ILLoWESfjIVTv4hxQz5IsU/vBXT1BUEdjPdgXhU1Y+my13sgJ5+9JP5EGVuozATKQJLdnITxNseBF//yeVWdjCEbXnCSLS89DItBGUmuITWahSAspuve8r1glTbDFLUohzaswO037eOSRsM8QmFtiPgnFizDczoYHfMJThvkwgmHlz89fMHwHmqxRIC2NvCxj4qauArdUNmfu4ZMnuz2Vxnij5HG0LxA0vsd+G6J01y/7i/PGjhDsqPXrT746miqTBCf876MdkizfYCTaa+nVfRYBXj+HXEiS0kRxk6FwgD8y1wQ4aNsnyBpyWThWWcU+I/Xlv4JMgMx7fvRfDxesOfjGCtvFfHxKRTxSBqtgMNjuU35zyYRMDUsnlGE3MQ5Rfy4xSzITcPOpXKpnJeVJ5bgOKCE9C1P15xYkwp2L7NjL+wqyQzy96cQFjQzITAll2uM59aukY8MrpV3+izYNbfL7/ozDDVoLq8JZLybObs=) 2025-05-14 02:05:55.495028 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDydDS8vHkVRVpM/XOLrUDFCYz7vrgLGwAaxRoGIDIHs1c9xiN006GEY7daPN/L1Cg04V7HLF+8txJ1pkfPyajw=) 2025-05-14 02:05:55.495068 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDHa3CIFL/8YhZ3MRokwlc5AG6amZaDU1vYnzxrqakXd) 2025-05-14 02:05:55.495085 | orchestrator | 2025-05-14 02:05:55.495100 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-14 02:05:55.495130 | orchestrator | Wednesday 14 May 2025 02:05:55 +0000 (0:00:01.111) 0:00:27.904 ********* 2025-05-14 02:05:55.643810 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-14 02:05:55.644273 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-14 02:05:55.645739 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-14 02:05:55.646361 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-14 02:05:55.646826 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-14 02:05:55.647496 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-14 02:05:55.647811 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-14 02:05:55.648276 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:05:55.648761 | orchestrator | 2025-05-14 02:05:55.649176 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-14 02:05:55.649600 | orchestrator | Wednesday 14 May 2025 02:05:55 +0000 (0:00:00.158) 0:00:28.062 ********* 2025-05-14 02:05:55.710013 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:05:55.710166 | orchestrator | 2025-05-14 
02:05:55.710184 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-14 02:05:55.710230 | orchestrator | Wednesday 14 May 2025 02:05:55 +0000 (0:00:00.064) 0:00:28.127 ********* 2025-05-14 02:05:55.774254 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:05:55.774487 | orchestrator | 2025-05-14 02:05:55.774877 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-14 02:05:55.775443 | orchestrator | Wednesday 14 May 2025 02:05:55 +0000 (0:00:00.066) 0:00:28.193 ********* 2025-05-14 02:05:56.516179 | orchestrator | changed: [testbed-manager] 2025-05-14 02:05:56.516388 | orchestrator | 2025-05-14 02:05:56.518381 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:05:56.527733 | orchestrator | 2025-05-14 02:05:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:05:56.527779 | orchestrator | 2025-05-14 02:05:56 | INFO  | Please wait and do not abort execution. 2025-05-14 02:05:56.528448 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:05:56.529099 | orchestrator | 2025-05-14 02:05:56.529111 | orchestrator | Wednesday 14 May 2025 02:05:56 +0000 (0:00:00.740) 0:00:28.933 ********* 2025-05-14 02:05:56.529611 | orchestrator | =============================================================================== 2025-05-14 02:05:56.530110 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.14s 2025-05-14 02:05:56.530567 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.49s 2025-05-14 02:05:56.530579 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.29s 2025-05-14 02:05:56.530791 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2025-05-14 02:05:56.531108 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-05-14 02:05:56.531964 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-05-14 02:05:56.531984 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2025-05-14 02:05:56.531990 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-05-14 02:05:56.532272 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-05-14 02:05:56.532281 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-14 02:05:56.533504 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-14 02:05:56.534127 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-05-14 02:05:56.534671 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-14 02:05:56.534833 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-14 02:05:56.535843 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-14 02:05:56.536608 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-14 02:05:56.536678 | orchestrator 
| osism.commons.known_hosts : Set file permissions ------------------------ 0.74s 2025-05-14 02:05:56.537892 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-05-14 02:05:56.538190 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-05-14 02:05:56.538240 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-05-14 02:05:56.944893 | orchestrator | + osism apply squid 2025-05-14 02:05:58.406615 | orchestrator | 2025-05-14 02:05:58 | INFO  | Task 0839cf7e-9305-438c-81bd-4d02105aef04 (squid) was prepared for execution. 2025-05-14 02:05:58.406739 | orchestrator | 2025-05-14 02:05:58 | INFO  | It takes a moment until task 0839cf7e-9305-438c-81bd-4d02105aef04 (squid) has been started and output is visible here. 2025-05-14 02:06:01.565989 | orchestrator | 2025-05-14 02:06:01.568310 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-14 02:06:01.568346 | orchestrator | 2025-05-14 02:06:01.570848 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-14 02:06:01.571854 | orchestrator | Wednesday 14 May 2025 02:06:01 +0000 (0:00:00.111) 0:00:00.111 ********* 2025-05-14 02:06:01.655092 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-14 02:06:01.655682 | orchestrator | 2025-05-14 02:06:01.656022 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-14 02:06:01.656592 | orchestrator | Wednesday 14 May 2025 02:06:01 +0000 (0:00:00.091) 0:00:00.203 ********* 2025-05-14 02:06:03.163235 | orchestrator | ok: [testbed-manager] 2025-05-14 02:06:03.163475 | orchestrator | 2025-05-14 02:06:03.163850 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-14 02:06:03.164068 | orchestrator | Wednesday 14 May 2025 02:06:03 +0000 (0:00:01.505) 0:00:01.708 ********* 2025-05-14 02:06:04.332139 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-14 02:06:04.332294 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-14 02:06:04.332377 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-14 02:06:04.332811 | orchestrator | 2025-05-14 02:06:04.332836 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-14 02:06:04.332932 | orchestrator | Wednesday 14 May 2025 02:06:04 +0000 (0:00:01.171) 0:00:02.880 ********* 2025-05-14 02:06:05.246956 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-14 02:06:05.247487 | orchestrator | 2025-05-14 02:06:05.249078 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-14 02:06:05.250110 | orchestrator | Wednesday 14 May 2025 02:06:05 +0000 (0:00:00.914) 0:00:03.794 ********* 2025-05-14 02:06:05.589331 | orchestrator | ok: [testbed-manager] 2025-05-14 02:06:05.589613 | orchestrator | 2025-05-14 02:06:05.590199 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-14 02:06:05.590838 | orchestrator | Wednesday 14 May 2025 02:06:05 +0000 (0:00:00.343) 0:00:04.137 ********* 2025-05-14 02:06:06.456810 | orchestrator | 
changed: [testbed-manager] 2025-05-14 02:06:06.456969 | orchestrator | 2025-05-14 02:06:06.457051 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-14 02:06:06.457070 | orchestrator | Wednesday 14 May 2025 02:06:06 +0000 (0:00:00.867) 0:00:05.005 ********* 2025-05-14 02:06:38.469753 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-05-14 02:06:38.469904 | orchestrator | ok: [testbed-manager] 2025-05-14 02:06:38.469921 | orchestrator | 2025-05-14 02:06:38.469934 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-14 02:06:38.469947 | orchestrator | Wednesday 14 May 2025 02:06:38 +0000 (0:00:32.005) 0:00:37.010 ********* 2025-05-14 02:06:50.980118 | orchestrator | changed: [testbed-manager] 2025-05-14 02:06:50.980243 | orchestrator | 2025-05-14 02:06:50.980262 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-14 02:06:50.980276 | orchestrator | Wednesday 14 May 2025 02:06:50 +0000 (0:00:12.513) 0:00:49.524 ********* 2025-05-14 02:07:51.066739 | orchestrator | Pausing for 60 seconds 2025-05-14 02:07:51.066853 | orchestrator | changed: [testbed-manager] 2025-05-14 02:07:51.066871 | orchestrator | 2025-05-14 02:07:51.066977 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-14 02:07:51.066993 | orchestrator | Wednesday 14 May 2025 02:07:51 +0000 (0:01:00.085) 0:01:49.610 ********* 2025-05-14 02:07:51.122460 | orchestrator | ok: [testbed-manager] 2025-05-14 02:07:51.123658 | orchestrator | 2025-05-14 02:07:51.125150 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-14 02:07:51.126109 | orchestrator | Wednesday 14 May 2025 02:07:51 +0000 (0:00:00.059) 0:01:49.670 ********* 2025-05-14 02:07:51.755120 | orchestrator | changed: [testbed-manager] 2025-05-14 02:07:51.755276 | orchestrator | 2025-05-14 02:07:51.756655 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:07:51.756679 | orchestrator | 2025-05-14 02:07:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:07:51.756687 | orchestrator | 2025-05-14 02:07:51 | INFO  | Please wait and do not abort execution. 
2025-05-14 02:07:51.757033 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-14 02:07:51.757676 | orchestrator |
2025-05-14 02:07:51.758113 | orchestrator | Wednesday 14 May 2025 02:07:51 +0000 (0:00:00.632) 0:01:50.302 *********
2025-05-14 02:07:51.759649 | orchestrator | ===============================================================================
2025-05-14 02:07:51.760558 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2025-05-14 02:07:51.761741 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.01s
2025-05-14 02:07:51.762724 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.51s
2025-05-14 02:07:51.763199 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.51s
2025-05-14 02:07:51.764422 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.17s
2025-05-14 02:07:51.764702 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.91s
2025-05-14 02:07:51.765583 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.87s
2025-05-14 02:07:51.766054 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s
2025-05-14 02:07:51.766359 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s
2025-05-14 02:07:51.766973 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2025-05-14 02:07:51.767264 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2025-05-14 02:07:52.352773 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-14 02:07:52.352874 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-05-14 02:07:52.357474 | orchestrator | ++ semver 8.1.0 9.0.0
2025-05-14 02:07:52.406864 | orchestrator | + [[ -1 -lt 0 ]]
2025-05-14 02:07:52.406909 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]]
2025-05-14 02:07:52.406922 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml
2025-05-14 02:07:52.411599 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-05-14 02:07:52.418601 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml
2025-05-14 02:07:52.424376 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-05-14 02:07:53.986571 | orchestrator | 2025-05-14 02:07:53 | INFO  | Task f6962a4a-af7d-4ca8-990e-5a1d4f894d31 (operator) was prepared for execution.
2025-05-14 02:07:53.986677 | orchestrator | 2025-05-14 02:07:53 | INFO  | It takes a moment until task f6962a4a-af7d-4ca8-990e-5a1d4f894d31 (operator) has been started and output is visible here.
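The sed commands traced above switch the Kolla image namespace to the release namespace and uncomment the VXLAN network-dispatcher entries in the testbed group variables. As a rough illustration only (the files themselves are not shown in this log, so indentation and surrounding keys are assumptions derived from the sed patterns), the affected fragments should end up looking approximately like this:

    # /opt/configuration/inventory/group_vars/all/kolla.yml
    docker_namespace: kolla/release

    # /opt/configuration/inventory/group_vars/testbed-nodes.yml (and testbed-managers.yml)
    network_dispatcher_scripts:
      - src: /opt/configuration/network/vxlan.sh
        dest: routable.d/vxlan.sh

The routable.d/ destination presumably refers to networkd-dispatcher, which runs scripts from that directory whenever an interface reaches the routable state, so vxlan.sh would be re-applied on every such transition.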
2025-05-14 02:07:57.252326 | orchestrator | 2025-05-14 02:07:57.252467 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-14 02:07:57.252486 | orchestrator | 2025-05-14 02:07:57.252696 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-14 02:07:57.252791 | orchestrator | Wednesday 14 May 2025 02:07:57 +0000 (0:00:00.097) 0:00:00.097 ********* 2025-05-14 02:08:00.646277 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:00.646384 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:00.649036 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:00.649066 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:00.649112 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:00.649123 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:00.649135 | orchestrator | 2025-05-14 02:08:00.649148 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-14 02:08:00.649203 | orchestrator | Wednesday 14 May 2025 02:08:00 +0000 (0:00:03.392) 0:00:03.490 ********* 2025-05-14 02:08:01.466689 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:01.468027 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:01.470859 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:01.470894 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:01.471298 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:01.475881 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:01.482916 | orchestrator | 2025-05-14 02:08:01.483782 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-14 02:08:01.484473 | orchestrator | 2025-05-14 02:08:01.485219 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-14 02:08:01.486140 | orchestrator | Wednesday 14 May 2025 02:08:01 +0000 (0:00:00.822) 0:00:04.312 ********* 2025-05-14 02:08:01.542592 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:01.589223 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:01.618884 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:01.676054 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:01.676200 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:01.676887 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:01.677669 | orchestrator | 2025-05-14 02:08:01.678010 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-14 02:08:01.678642 | orchestrator | Wednesday 14 May 2025 02:08:01 +0000 (0:00:00.210) 0:00:04.522 ********* 2025-05-14 02:08:01.758331 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:01.786197 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:01.813943 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:01.872156 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:01.873181 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:01.876714 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:01.876740 | orchestrator | 2025-05-14 02:08:01.876753 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-14 02:08:01.877084 | orchestrator | Wednesday 14 May 2025 02:08:01 +0000 (0:00:00.194) 0:00:04.717 ********* 2025-05-14 02:08:02.489682 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:02.489829 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:02.493848 | orchestrator | changed: [testbed-node-5] 2025-05-14 
02:08:02.493876 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:02.493888 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:02.493901 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:02.493913 | orchestrator | 2025-05-14 02:08:02.493926 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-14 02:08:02.494348 | orchestrator | Wednesday 14 May 2025 02:08:02 +0000 (0:00:00.618) 0:00:05.336 ********* 2025-05-14 02:08:03.392753 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:03.392875 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:03.395842 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:03.395893 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:03.395907 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:03.396014 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:03.396935 | orchestrator | 2025-05-14 02:08:03.398278 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-14 02:08:03.398347 | orchestrator | Wednesday 14 May 2025 02:08:03 +0000 (0:00:00.899) 0:00:06.235 ********* 2025-05-14 02:08:04.557329 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-14 02:08:04.557997 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-14 02:08:04.559839 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-14 02:08:04.562696 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-14 02:08:04.562724 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-14 02:08:04.562737 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-14 02:08:04.563657 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-14 02:08:04.564580 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-14 02:08:04.565737 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-14 02:08:04.566647 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-14 02:08:04.569905 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-14 02:08:04.570089 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-14 02:08:04.570554 | orchestrator | 2025-05-14 02:08:04.570769 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-14 02:08:04.573867 | orchestrator | Wednesday 14 May 2025 02:08:04 +0000 (0:00:01.160) 0:00:07.396 ********* 2025-05-14 02:08:05.864646 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:05.864747 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:05.864762 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:05.865294 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:05.865629 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:05.867009 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:05.867218 | orchestrator | 2025-05-14 02:08:05.868133 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-14 02:08:05.868642 | orchestrator | Wednesday 14 May 2025 02:08:05 +0000 (0:00:01.311) 0:00:08.707 ********* 2025-05-14 02:08:07.094962 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-14 02:08:07.095825 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-14 02:08:07.099409 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-14 02:08:07.166667 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 02:08:07.168650 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 02:08:07.172001 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 02:08:07.172627 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 02:08:07.173337 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 02:08:07.174221 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-14 02:08:07.175941 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-14 02:08:07.176029 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-14 02:08:07.176098 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-14 02:08:07.176382 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-14 02:08:07.177418 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-14 02:08:07.178302 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-14 02:08:07.178798 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-14 02:08:07.179841 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-14 02:08:07.180928 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-14 02:08:07.181250 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-14 02:08:07.181600 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-14 02:08:07.182079 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-14 02:08:07.182567 | orchestrator | 2025-05-14 02:08:07.182682 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-14 02:08:07.182996 | orchestrator | Wednesday 14 May 2025 02:08:07 +0000 (0:00:01.303) 0:00:10.010 ********* 2025-05-14 02:08:07.711685 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:07.711808 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:07.712321 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:07.713961 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:07.715499 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:07.717953 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:07.718392 | orchestrator | 2025-05-14 02:08:07.719259 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-14 02:08:07.719795 | orchestrator | Wednesday 14 May 2025 02:08:07 +0000 (0:00:00.544) 0:00:10.555 ********* 2025-05-14 02:08:07.768875 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:08:07.802893 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:08:07.819419 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:08:07.862653 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:07.862719 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:07.862732 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:07.862829 | orchestrator | 2025-05-14 02:08:07.863295 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-05-14 02:08:07.863598 | orchestrator | Wednesday 14 May 2025 02:08:07 +0000 (0:00:00.154) 0:00:10.710 ********* 2025-05-14 02:08:08.518155 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:08:08.518249 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:08.518266 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 02:08:08.518278 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-14 02:08:08.518290 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:08.519932 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:08.519963 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 02:08:08.520027 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:08.520284 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-14 02:08:08.520581 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 02:08:08.521478 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:08.522430 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:08.522591 | orchestrator | 2025-05-14 02:08:08.525831 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-14 02:08:08.525898 | orchestrator | Wednesday 14 May 2025 02:08:08 +0000 (0:00:00.653) 0:00:11.363 ********* 2025-05-14 02:08:08.553404 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:08:08.593016 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:08:08.614757 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:08:08.653471 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:08.654701 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:08.655379 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:08.656461 | orchestrator | 2025-05-14 02:08:08.657218 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-14 02:08:08.658098 | orchestrator | Wednesday 14 May 2025 02:08:08 +0000 (0:00:00.137) 0:00:11.500 ********* 2025-05-14 02:08:08.701143 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:08:08.721501 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:08:08.740148 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:08:08.758706 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:08.782832 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:08.786933 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:08.787907 | orchestrator | 2025-05-14 02:08:08.789048 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-14 02:08:08.789682 | orchestrator | Wednesday 14 May 2025 02:08:08 +0000 (0:00:00.129) 0:00:11.630 ********* 2025-05-14 02:08:08.863619 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:08:08.887893 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:08:08.922706 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:08:08.929595 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:08.929634 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:08.929814 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:08.930250 | orchestrator | 2025-05-14 02:08:08.930829 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-14 02:08:08.931817 | orchestrator | Wednesday 14 May 2025 02:08:08 +0000 (0:00:00.146) 0:00:11.777 ********* 2025-05-14 02:08:09.645053 | orchestrator | changed: [testbed-node-0] 2025-05-14 
02:08:09.645121 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:09.645136 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:09.645147 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:09.645159 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:09.645229 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:09.645244 | orchestrator | 2025-05-14 02:08:09.645257 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-14 02:08:09.645276 | orchestrator | Wednesday 14 May 2025 02:08:09 +0000 (0:00:00.692) 0:00:12.469 ********* 2025-05-14 02:08:09.727944 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:08:09.752118 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:08:09.851239 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:08:09.852080 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:09.853428 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:09.853994 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:09.854924 | orchestrator | 2025-05-14 02:08:09.856287 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:08:09.858281 | orchestrator | 2025-05-14 02:08:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:08:09.859408 | orchestrator | 2025-05-14 02:08:09 | INFO  | Please wait and do not abort execution. 2025-05-14 02:08:09.860679 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:08:09.865090 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:08:09.865892 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:08:09.867089 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:08:09.868100 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:08:09.868834 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:08:09.869365 | orchestrator | 2025-05-14 02:08:09.871095 | orchestrator | Wednesday 14 May 2025 02:08:09 +0000 (0:00:00.228) 0:00:12.698 ********* 2025-05-14 02:08:09.871613 | orchestrator | =============================================================================== 2025-05-14 02:08:09.873166 | orchestrator | Gathering Facts --------------------------------------------------------- 3.39s 2025-05-14 02:08:09.873193 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.31s 2025-05-14 02:08:09.874080 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.30s 2025-05-14 02:08:09.874828 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s 2025-05-14 02:08:09.875591 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.90s 2025-05-14 02:08:09.878403 | orchestrator | Do not require tty for all users ---------------------------------------- 0.82s 2025-05-14 02:08:09.878427 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s 2025-05-14 02:08:09.878439 | orchestrator | osism.commons.operator : Set ssh authorized keys 
------------------------ 0.65s 2025-05-14 02:08:09.878450 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s 2025-05-14 02:08:09.878476 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.54s 2025-05-14 02:08:09.878488 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2025-05-14 02:08:09.878557 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.21s 2025-05-14 02:08:09.878579 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s 2025-05-14 02:08:09.878667 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s 2025-05-14 02:08:09.879156 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2025-05-14 02:08:09.879860 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-05-14 02:08:09.880147 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s 2025-05-14 02:08:10.410942 | orchestrator | + osism apply --environment custom facts 2025-05-14 02:08:12.028372 | orchestrator | 2025-05-14 02:08:12 | INFO  | Trying to run play facts in environment custom 2025-05-14 02:08:12.096322 | orchestrator | 2025-05-14 02:08:12 | INFO  | Task 13b091c4-59de-49f1-b971-dc1ed183516d (facts) was prepared for execution. 2025-05-14 02:08:12.096413 | orchestrator | 2025-05-14 02:08:12 | INFO  | It takes a moment until task 13b091c4-59de-49f1-b971-dc1ed183516d (facts) has been started and output is visible here. 2025-05-14 02:08:15.530273 | orchestrator | 2025-05-14 02:08:15.535232 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-14 02:08:15.536054 | orchestrator | 2025-05-14 02:08:15.537106 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-14 02:08:15.537472 | orchestrator | Wednesday 14 May 2025 02:08:15 +0000 (0:00:00.100) 0:00:00.100 ********* 2025-05-14 02:08:16.846856 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:17.967342 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:17.970951 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:17.971702 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:17.972679 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:17.974283 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:17.975192 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:17.976783 | orchestrator | 2025-05-14 02:08:17.977279 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-14 02:08:17.978509 | orchestrator | Wednesday 14 May 2025 02:08:17 +0000 (0:00:02.442) 0:00:02.543 ********* 2025-05-14 02:08:19.200278 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:20.104383 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:20.107936 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:20.109885 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:20.112376 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:08:20.113373 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:08:20.115720 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:08:20.116583 | orchestrator | 2025-05-14 02:08:20.117595 | orchestrator | PLAY [Copy custom ceph devices facts] 
****************************************** 2025-05-14 02:08:20.118939 | orchestrator | 2025-05-14 02:08:20.119608 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-14 02:08:20.120670 | orchestrator | Wednesday 14 May 2025 02:08:20 +0000 (0:00:02.136) 0:00:04.679 ********* 2025-05-14 02:08:20.187450 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:20.274215 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:20.275267 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:20.276167 | orchestrator | 2025-05-14 02:08:20.276945 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-14 02:08:20.279175 | orchestrator | Wednesday 14 May 2025 02:08:20 +0000 (0:00:00.172) 0:00:04.851 ********* 2025-05-14 02:08:20.427835 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:20.427915 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:20.427966 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:20.428639 | orchestrator | 2025-05-14 02:08:20.428999 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-14 02:08:20.429701 | orchestrator | Wednesday 14 May 2025 02:08:20 +0000 (0:00:00.152) 0:00:05.004 ********* 2025-05-14 02:08:20.581888 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:20.582935 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:20.583045 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:20.584364 | orchestrator | 2025-05-14 02:08:20.585093 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-14 02:08:20.586309 | orchestrator | Wednesday 14 May 2025 02:08:20 +0000 (0:00:00.149) 0:00:05.154 ********* 2025-05-14 02:08:20.745924 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:08:20.747091 | orchestrator | 2025-05-14 02:08:20.750591 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-14 02:08:20.750628 | orchestrator | Wednesday 14 May 2025 02:08:20 +0000 (0:00:00.169) 0:00:05.323 ********* 2025-05-14 02:08:21.197194 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:21.199376 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:21.203044 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:21.203510 | orchestrator | 2025-05-14 02:08:21.203987 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-14 02:08:21.204496 | orchestrator | Wednesday 14 May 2025 02:08:21 +0000 (0:00:00.451) 0:00:05.775 ********* 2025-05-14 02:08:21.327194 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:21.327304 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:21.328134 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:21.332342 | orchestrator | 2025-05-14 02:08:21.332827 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-14 02:08:21.333065 | orchestrator | Wednesday 14 May 2025 02:08:21 +0000 (0:00:00.128) 0:00:05.904 ********* 2025-05-14 02:08:22.375612 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:22.376352 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:22.376950 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:22.378165 | orchestrator | 2025-05-14 02:08:22.379131 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-05-14 02:08:22.379798 | orchestrator | Wednesday 14 May 2025 02:08:22 +0000 (0:00:01.049) 0:00:06.953 ********* 2025-05-14 02:08:22.867774 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:22.870143 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:22.870319 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:22.870420 | orchestrator | 2025-05-14 02:08:22.870901 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-14 02:08:22.871654 | orchestrator | Wednesday 14 May 2025 02:08:22 +0000 (0:00:00.487) 0:00:07.441 ********* 2025-05-14 02:08:23.980488 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:23.980781 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:23.980811 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:23.980846 | orchestrator | 2025-05-14 02:08:23.980961 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-14 02:08:23.980981 | orchestrator | Wednesday 14 May 2025 02:08:23 +0000 (0:00:01.113) 0:00:08.555 ********* 2025-05-14 02:08:37.571615 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:37.576211 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:37.576442 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:37.577088 | orchestrator | 2025-05-14 02:08:37.577623 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-14 02:08:37.578098 | orchestrator | Wednesday 14 May 2025 02:08:37 +0000 (0:00:13.584) 0:00:22.140 ********* 2025-05-14 02:08:37.646002 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:08:37.706959 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:08:37.708469 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:08:37.708500 | orchestrator | 2025-05-14 02:08:37.708513 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-14 02:08:37.711155 | orchestrator | Wednesday 14 May 2025 02:08:37 +0000 (0:00:00.139) 0:00:22.280 ********* 2025-05-14 02:08:45.087137 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:08:45.087840 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:08:45.089425 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:08:45.091826 | orchestrator | 2025-05-14 02:08:45.091947 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-14 02:08:45.092256 | orchestrator | Wednesday 14 May 2025 02:08:45 +0000 (0:00:07.384) 0:00:29.665 ********* 2025-05-14 02:08:45.514969 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:45.515095 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:45.516143 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:45.516970 | orchestrator | 2025-05-14 02:08:45.517687 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-14 02:08:45.518149 | orchestrator | Wednesday 14 May 2025 02:08:45 +0000 (0:00:00.425) 0:00:30.090 ********* 2025-05-14 02:08:49.006330 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-14 02:08:49.007651 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-14 02:08:49.007760 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-14 02:08:49.010010 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 
2025-05-14 02:08:49.010627 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-14 02:08:49.011090 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-05-14 02:08:49.011973 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-14 02:08:49.012936 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-14 02:08:49.013771 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-14 02:08:49.014310 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-14 02:08:49.014910 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-14 02:08:49.015548 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-14 02:08:49.016134 | orchestrator | 2025-05-14 02:08:49.016792 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-14 02:08:49.018124 | orchestrator | Wednesday 14 May 2025 02:08:48 +0000 (0:00:03.491) 0:00:33.582 ********* 2025-05-14 02:08:50.110114 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:50.110904 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:50.111820 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:50.113005 | orchestrator | 2025-05-14 02:08:50.113663 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 02:08:50.114646 | orchestrator | 2025-05-14 02:08:50.116124 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 02:08:50.116833 | orchestrator | Wednesday 14 May 2025 02:08:50 +0000 (0:00:01.104) 0:00:34.687 ********* 2025-05-14 02:08:51.886170 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:08:55.065251 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:08:55.065904 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:08:55.067725 | orchestrator | ok: [testbed-manager] 2025-05-14 02:08:55.069240 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:08:55.070100 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:08:55.070928 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:08:55.071828 | orchestrator | 2025-05-14 02:08:55.073807 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:08:55.073851 | orchestrator | 2025-05-14 02:08:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:08:55.073866 | orchestrator | 2025-05-14 02:08:55 | INFO  | Please wait and do not abort execution. 
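The facts plays above copy a custom network-devices fact to every host and the testbed_ceph_devices*/testbed_ceph_osd_devices* fact files to testbed-node-3 through testbed-node-5, then re-gather facts so the new values become visible right away. A minimal sketch of how a later play could read one of them back, assuming the files are installed as standard Ansible local facts under /etc/ansible/facts.d (the exact destination path and file extension are not shown in this log):

    - name: Show Ceph OSD devices exported by the custom fact
      ansible.builtin.debug:
        var: ansible_local.testbed_ceph_osd_devices

Local facts only appear under ansible_local after the next fact-gathering run, which is presumably why the "Gathers facts about hosts" task follows the copy tasks here.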
2025-05-14 02:08:55.074173 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:08:55.074787 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:08:55.075176 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:08:55.075774 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:08:55.076133 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:08:55.076762 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:08:55.076972 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:08:55.077309 | orchestrator | 2025-05-14 02:08:55.078056 | orchestrator | Wednesday 14 May 2025 02:08:55 +0000 (0:00:04.956) 0:00:39.643 ********* 2025-05-14 02:08:55.078300 | orchestrator | =============================================================================== 2025-05-14 02:08:55.078683 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.58s 2025-05-14 02:08:55.079150 | orchestrator | Install required packages (Debian) -------------------------------------- 7.38s 2025-05-14 02:08:55.079526 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.96s 2025-05-14 02:08:55.079917 | orchestrator | Copy fact files --------------------------------------------------------- 3.49s 2025-05-14 02:08:55.080104 | orchestrator | Create custom facts directory ------------------------------------------- 2.44s 2025-05-14 02:08:55.080476 | orchestrator | Copy fact file ---------------------------------------------------------- 2.14s 2025-05-14 02:08:55.080690 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.11s 2025-05-14 02:08:55.080980 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.10s 2025-05-14 02:08:55.081290 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s 2025-05-14 02:08:55.081700 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.49s 2025-05-14 02:08:55.081906 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2025-05-14 02:08:55.082155 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s 2025-05-14 02:08:55.082669 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.17s 2025-05-14 02:08:55.082891 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s 2025-05-14 02:08:55.083136 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.15s 2025-05-14 02:08:55.083431 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.15s 2025-05-14 02:08:55.083932 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.14s 2025-05-14 02:08:55.084156 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s 2025-05-14 02:08:55.743747 | orchestrator | + osism apply bootstrap 2025-05-14 02:08:57.263715 | 
orchestrator | 2025-05-14 02:08:57 | INFO  | Task d80f5da3-a991-4cbd-bfdb-a9148b88e3a7 (bootstrap) was prepared for execution. 2025-05-14 02:08:57.263811 | orchestrator | 2025-05-14 02:08:57 | INFO  | It takes a moment until task d80f5da3-a991-4cbd-bfdb-a9148b88e3a7 (bootstrap) has been started and output is visible here. 2025-05-14 02:09:00.776465 | orchestrator | 2025-05-14 02:09:00.776693 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-14 02:09:00.777984 | orchestrator | 2025-05-14 02:09:00.779171 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-14 02:09:00.780025 | orchestrator | Wednesday 14 May 2025 02:09:00 +0000 (0:00:00.119) 0:00:00.119 ********* 2025-05-14 02:09:00.861895 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:00.895005 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:00.929389 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:00.960962 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:01.062783 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:01.063427 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:01.064496 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:01.065437 | orchestrator | 2025-05-14 02:09:01.068906 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 02:09:01.068973 | orchestrator | 2025-05-14 02:09:01.068986 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 02:09:01.069044 | orchestrator | Wednesday 14 May 2025 02:09:01 +0000 (0:00:00.289) 0:00:00.409 ********* 2025-05-14 02:09:04.809111 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:04.809196 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:04.809207 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:04.809219 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:04.812194 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:04.812284 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:04.816702 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:04.816754 | orchestrator | 2025-05-14 02:09:04.819452 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-14 02:09:04.819489 | orchestrator | 2025-05-14 02:09:04.819500 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 02:09:04.819511 | orchestrator | Wednesday 14 May 2025 02:09:04 +0000 (0:00:03.744) 0:00:04.153 ********* 2025-05-14 02:09:04.962095 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-14 02:09:04.963736 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-14 02:09:05.060282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-14 02:09:05.148256 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:09:05.148352 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-14 02:09:05.148365 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-14 02:09:05.148377 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-14 02:09:05.148389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:09:05.221209 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:09:05.221306 | orchestrator | skipping: [testbed-manager] 
=> (item=testbed-node-0)  2025-05-14 02:09:05.223210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:09:05.223615 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-14 02:09:05.225381 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-14 02:09:05.227229 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:09:05.227326 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-14 02:09:05.544891 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:09:05.545267 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:09:05.549602 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:09:05.549788 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-14 02:09:05.554193 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:09:05.554691 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:09:05.555549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:09:05.555889 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:09:05.556566 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:09:05.557190 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:09:05.559643 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:09:05.560426 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-14 02:09:05.562082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:09:05.562804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:09:05.564442 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:09:05.565097 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:09:05.565626 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:09:05.565871 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:09:05.566515 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:09:05.566745 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 02:09:05.567456 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:09:05.567757 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 02:09:05.568228 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-14 02:09:05.568513 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:09:05.569070 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:09:05.569325 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 02:09:05.571007 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 02:09:05.571786 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:09:05.572908 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:09:05.574117 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 02:09:05.574963 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:09:05.575765 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:09:05.576708 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-0)  2025-05-14 02:09:05.578084 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 02:09:05.580794 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 02:09:05.581221 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 02:09:05.581964 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 02:09:05.582847 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:09:05.583119 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 02:09:05.584904 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 02:09:05.586860 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:09:05.587561 | orchestrator | 2025-05-14 02:09:05.589050 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-14 02:09:05.591169 | orchestrator | 2025-05-14 02:09:05.591304 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-05-14 02:09:05.591517 | orchestrator | Wednesday 14 May 2025 02:09:05 +0000 (0:00:00.736) 0:00:04.890 ********* 2025-05-14 02:09:05.647868 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:05.685035 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:05.709034 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:05.736105 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:05.788236 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:05.789167 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:05.790347 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:05.791426 | orchestrator | 2025-05-14 02:09:05.792442 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-14 02:09:05.793054 | orchestrator | Wednesday 14 May 2025 02:09:05 +0000 (0:00:00.244) 0:00:05.134 ********* 2025-05-14 02:09:07.102332 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:07.102823 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:07.103675 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:07.104149 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:07.104752 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:07.105535 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:07.108029 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:07.108974 | orchestrator | 2025-05-14 02:09:07.110393 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-14 02:09:07.110757 | orchestrator | Wednesday 14 May 2025 02:09:07 +0000 (0:00:01.314) 0:00:06.448 ********* 2025-05-14 02:09:08.511882 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:08.516046 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:08.516611 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:08.517664 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:08.518898 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:08.518926 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:08.519701 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:08.520299 | orchestrator | 2025-05-14 02:09:08.520711 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-14 02:09:08.521390 | orchestrator | Wednesday 14 May 2025 02:09:08 +0000 (0:00:01.406) 0:00:07.854 ********* 2025-05-14 02:09:08.867551 | orchestrator | included: 
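The osism.commons.hostname tasks above set the running hostname and write /etc/hostname on every node. A rough equivalent of those two steps, assuming a hostname_name variable as suggested by the "Set hostname_name fact" task; the module choice and file mode are illustrative, not the role's actual content.

- name: Set hostname
  ansible.builtin.hostname:
    name: "{{ hostname_name }}"

- name: Copy /etc/hostname
  ansible.builtin.copy:
    content: "{{ hostname_name }}\n"   # single line holding the short hostname
    dest: /etc/hostname
    owner: root
    group: root
    mode: "0644"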
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:09:08.867674 | orchestrator | 2025-05-14 02:09:08.871933 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-14 02:09:08.871964 | orchestrator | Wednesday 14 May 2025 02:09:08 +0000 (0:00:00.357) 0:00:08.212 ********* 2025-05-14 02:09:11.190754 | orchestrator | changed: [testbed-manager] 2025-05-14 02:09:11.191610 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:09:11.192334 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:09:11.193603 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:09:11.195350 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:09:11.195885 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:09:11.197397 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:09:11.197493 | orchestrator | 2025-05-14 02:09:11.198626 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-14 02:09:11.199673 | orchestrator | Wednesday 14 May 2025 02:09:11 +0000 (0:00:02.321) 0:00:10.533 ********* 2025-05-14 02:09:11.274261 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:09:11.490327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:09:11.490425 | orchestrator | 2025-05-14 02:09:11.494160 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-14 02:09:11.494647 | orchestrator | Wednesday 14 May 2025 02:09:11 +0000 (0:00:00.296) 0:00:10.831 ********* 2025-05-14 02:09:12.571505 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:09:12.571833 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:09:12.572395 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:09:12.574454 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:09:12.575302 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:09:12.575842 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:09:12.577328 | orchestrator | 2025-05-14 02:09:12.577354 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-14 02:09:12.577706 | orchestrator | Wednesday 14 May 2025 02:09:12 +0000 (0:00:01.084) 0:00:11.916 ********* 2025-05-14 02:09:12.668682 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:09:13.216448 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:09:13.217126 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:09:13.217601 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:09:13.218191 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:09:13.218464 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:09:13.218806 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:09:13.219282 | orchestrator | 2025-05-14 02:09:13.219728 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-14 02:09:13.220084 | orchestrator | Wednesday 14 May 2025 02:09:13 +0000 (0:00:00.645) 0:00:12.561 ********* 2025-05-14 02:09:13.340801 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:09:13.367632 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:09:13.401980 | 
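Two of the tasks above render configuration files: the hosts role templates /etc/hosts for all seven nodes, and the proxy role drops apt proxy settings on the Debian-family hosts. The sketch below shows one plausible shape for these tasks; the template name, the proxy_url variable and the 01proxy file name are assumptions, while Acquire::http::Proxy is standard apt.conf syntax.

- name: Copy /etc/hosts file
  ansible.builtin.template:
    src: hosts.j2            # assumed template name
    dest: /etc/hosts
    mode: "0644"

- name: Configure proxy parameters for apt
  ansible.builtin.copy:
    dest: /etc/apt/apt.conf.d/01proxy   # assumed file name
    mode: "0644"
    content: |
      Acquire::http::Proxy "{{ proxy_url }}";
      Acquire::https::Proxy "{{ proxy_url }}";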
orchestrator | skipping: [testbed-node-5] 2025-05-14 02:09:13.720454 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:09:13.722128 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:09:13.723943 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:09:13.724879 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:13.725875 | orchestrator | 2025-05-14 02:09:13.726308 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-14 02:09:13.726922 | orchestrator | Wednesday 14 May 2025 02:09:13 +0000 (0:00:00.499) 0:00:13.061 ********* 2025-05-14 02:09:13.804317 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:09:13.833194 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:09:13.862156 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:09:13.900180 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:09:13.973068 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:09:13.974340 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:09:13.975872 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:09:13.978801 | orchestrator | 2025-05-14 02:09:13.981554 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-14 02:09:13.982463 | orchestrator | Wednesday 14 May 2025 02:09:13 +0000 (0:00:00.254) 0:00:13.315 ********* 2025-05-14 02:09:14.334376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:09:14.335375 | orchestrator | 2025-05-14 02:09:14.336762 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-14 02:09:14.337754 | orchestrator | Wednesday 14 May 2025 02:09:14 +0000 (0:00:00.357) 0:00:13.673 ********* 2025-05-14 02:09:14.698738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:09:14.698908 | orchestrator | 2025-05-14 02:09:14.699565 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-14 02:09:14.701068 | orchestrator | Wednesday 14 May 2025 02:09:14 +0000 (0:00:00.369) 0:00:14.043 ********* 2025-05-14 02:09:16.098523 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:16.098702 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:16.098721 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:16.098793 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:16.098828 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:16.099405 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:16.099520 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:16.100958 | orchestrator | 2025-05-14 02:09:16.101086 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-14 02:09:16.101339 | orchestrator | Wednesday 14 May 2025 02:09:16 +0000 (0:00:01.390) 0:00:15.433 ********* 2025-05-14 02:09:16.187425 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:09:16.220935 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:09:16.255035 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:09:16.287835 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 02:09:16.370709 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:09:16.370885 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:09:16.372533 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:09:16.376573 | orchestrator | 2025-05-14 02:09:16.376842 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-14 02:09:16.377280 | orchestrator | Wednesday 14 May 2025 02:09:16 +0000 (0:00:00.281) 0:00:15.714 ********* 2025-05-14 02:09:17.041258 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:17.041414 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:17.042892 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:17.046249 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:17.046301 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:17.047028 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:17.047999 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:17.048918 | orchestrator | 2025-05-14 02:09:17.049842 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-14 02:09:17.050815 | orchestrator | Wednesday 14 May 2025 02:09:17 +0000 (0:00:00.667) 0:00:16.382 ********* 2025-05-14 02:09:17.128085 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:09:17.160119 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:09:17.192243 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:09:17.225415 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:09:17.321236 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:09:17.322865 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:09:17.324389 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:09:17.325877 | orchestrator | 2025-05-14 02:09:17.327385 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-14 02:09:17.328412 | orchestrator | Wednesday 14 May 2025 02:09:17 +0000 (0:00:00.282) 0:00:16.665 ********* 2025-05-14 02:09:17.887330 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:17.889095 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:09:17.894451 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:09:17.897472 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:09:17.898176 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:09:17.900738 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:09:17.901205 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:09:17.901737 | orchestrator | 2025-05-14 02:09:17.902288 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-14 02:09:17.903165 | orchestrator | Wednesday 14 May 2025 02:09:17 +0000 (0:00:00.567) 0:00:17.232 ********* 2025-05-14 02:09:19.116046 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:19.118341 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:09:19.118850 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:09:19.119980 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:09:19.123127 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:09:19.124626 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:09:19.124983 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:09:19.125692 | orchestrator | 2025-05-14 02:09:19.126111 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-14 02:09:19.126690 | orchestrator | Wednesday 14 May 2025 
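The resolvconf tasks above make systemd-resolved authoritative for name resolution: the existing /etc/resolv.conf is replaced by a symlink to the stub resolver file and the service is started and enabled. A minimal sketch of the link and service steps, matching the task names in the log; the force flag is an assumption.

- name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
  ansible.builtin.file:
    src: /run/systemd/resolve/stub-resolv.conf
    dest: /etc/resolv.conf
    state: link
    force: true   # assumption: overwrite a pre-existing regular file

- name: Start/enable systemd-resolved service
  ansible.builtin.service:
    name: systemd-resolved
    state: started
    enabled: true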
02:09:19 +0000 (0:00:01.226) 0:00:18.459 ********* 2025-05-14 02:09:20.447553 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:20.448732 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:20.450527 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:20.450829 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:20.454220 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:20.455547 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:20.456565 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:20.456933 | orchestrator | 2025-05-14 02:09:20.457380 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-14 02:09:20.458261 | orchestrator | Wednesday 14 May 2025 02:09:20 +0000 (0:00:01.328) 0:00:19.787 ********* 2025-05-14 02:09:20.834836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:09:20.835046 | orchestrator | 2025-05-14 02:09:20.835833 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-14 02:09:20.836402 | orchestrator | Wednesday 14 May 2025 02:09:20 +0000 (0:00:00.392) 0:00:20.180 ********* 2025-05-14 02:09:20.928731 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:09:22.474322 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:09:22.474974 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:09:22.475747 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:09:22.477140 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:09:22.478516 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:09:22.480732 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:09:22.481495 | orchestrator | 2025-05-14 02:09:22.482677 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-14 02:09:22.483585 | orchestrator | Wednesday 14 May 2025 02:09:22 +0000 (0:00:01.638) 0:00:21.818 ********* 2025-05-14 02:09:22.561571 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:22.596051 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:22.623374 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:22.658170 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:22.723778 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:22.724484 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:22.725483 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:22.727776 | orchestrator | 2025-05-14 02:09:22.727917 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-14 02:09:22.728741 | orchestrator | Wednesday 14 May 2025 02:09:22 +0000 (0:00:00.250) 0:00:22.069 ********* 2025-05-14 02:09:22.823140 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:22.861038 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:22.900073 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:22.937717 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:23.024782 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:23.024907 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:23.025177 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:23.025966 | orchestrator | 2025-05-14 02:09:23.026533 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-14 02:09:23.026976 | 
orchestrator | Wednesday 14 May 2025 02:09:23 +0000 (0:00:00.301) 0:00:22.370 ********* 2025-05-14 02:09:23.179695 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:23.200649 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:23.234972 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:23.317214 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:23.319961 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:23.320068 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:23.321015 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:23.321995 | orchestrator | 2025-05-14 02:09:23.322941 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-14 02:09:23.324239 | orchestrator | Wednesday 14 May 2025 02:09:23 +0000 (0:00:00.290) 0:00:22.661 ********* 2025-05-14 02:09:23.691279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:09:23.697065 | orchestrator | 2025-05-14 02:09:23.697149 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-14 02:09:23.697165 | orchestrator | Wednesday 14 May 2025 02:09:23 +0000 (0:00:00.371) 0:00:23.033 ********* 2025-05-14 02:09:24.384102 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:24.385809 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:24.386843 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:24.387979 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:24.389301 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:24.390740 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:24.391161 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:24.391934 | orchestrator | 2025-05-14 02:09:24.392851 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-14 02:09:24.393366 | orchestrator | Wednesday 14 May 2025 02:09:24 +0000 (0:00:00.693) 0:00:23.726 ********* 2025-05-14 02:09:24.495072 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:09:24.523778 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:09:24.552267 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:09:24.673369 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:09:24.673445 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:09:24.674074 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:09:24.674564 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:09:24.675677 | orchestrator | 2025-05-14 02:09:24.675929 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-14 02:09:24.676494 | orchestrator | Wednesday 14 May 2025 02:09:24 +0000 (0:00:00.292) 0:00:24.019 ********* 2025-05-14 02:09:25.868969 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:25.871180 | orchestrator | changed: [testbed-manager] 2025-05-14 02:09:25.872119 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:25.872512 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:25.873095 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:09:25.873341 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:09:25.874115 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:09:25.874848 | orchestrator | 2025-05-14 02:09:25.874908 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] 
********************* 2025-05-14 02:09:25.875698 | orchestrator | Wednesday 14 May 2025 02:09:25 +0000 (0:00:01.194) 0:00:25.214 ********* 2025-05-14 02:09:26.479156 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:26.479238 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:26.479819 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:26.479833 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:26.482660 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:26.482681 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:26.483930 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:26.485698 | orchestrator | 2025-05-14 02:09:26.487137 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-14 02:09:26.487906 | orchestrator | Wednesday 14 May 2025 02:09:26 +0000 (0:00:00.611) 0:00:25.826 ********* 2025-05-14 02:09:27.661423 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:27.661508 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:27.665759 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:27.667075 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:27.668542 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:09:27.669466 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:09:27.669853 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:09:27.670665 | orchestrator | 2025-05-14 02:09:27.673442 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-14 02:09:27.674662 | orchestrator | Wednesday 14 May 2025 02:09:27 +0000 (0:00:01.177) 0:00:27.004 ********* 2025-05-14 02:09:41.845972 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:41.846144 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:41.846164 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:41.848797 | orchestrator | changed: [testbed-manager] 2025-05-14 02:09:41.849146 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:09:41.850196 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:09:41.850382 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:09:41.851069 | orchestrator | 2025-05-14 02:09:41.851093 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-14 02:09:41.851748 | orchestrator | Wednesday 14 May 2025 02:09:41 +0000 (0:00:14.181) 0:00:41.186 ********* 2025-05-14 02:09:41.929473 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:41.948356 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:41.971075 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:41.989137 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:42.049282 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:42.049479 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:42.050353 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:42.051345 | orchestrator | 2025-05-14 02:09:42.052277 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-14 02:09:42.052811 | orchestrator | Wednesday 14 May 2025 02:09:42 +0000 (0:00:00.210) 0:00:41.396 ********* 2025-05-14 02:09:42.108896 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:42.131167 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:42.155080 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:42.176459 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:42.220045 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:42.220875 | orchestrator | ok: [testbed-node-1] 2025-05-14 
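On Ubuntu 24.04 the repository role removes the legacy sources.list, installs a deb822-style ubuntu.sources file, and then refreshes the package cache (the 14-second "Update package cache" step above). The sketch below illustrates that flow; the mirror URI and suite names are assumptions, not the values used by the testbed.

- name: Copy ubuntu.sources file
  ansible.builtin.copy:
    dest: /etc/apt/sources.list.d/ubuntu.sources
    mode: "0644"
    content: |
      Types: deb
      URIs: http://archive.ubuntu.com/ubuntu    # assumed mirror
      Suites: noble noble-updates
      Components: main restricted universe multiverse

- name: Update package cache
  ansible.builtin.apt:
    update_cache: true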
02:09:42.221149 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:42.224054 | orchestrator | 2025-05-14 02:09:42.224148 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-05-14 02:09:42.224165 | orchestrator | Wednesday 14 May 2025 02:09:42 +0000 (0:00:00.171) 0:00:41.567 ********* 2025-05-14 02:09:42.303951 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:42.326317 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:42.357408 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:42.415296 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:42.415802 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:42.416904 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:42.417525 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:42.418259 | orchestrator | 2025-05-14 02:09:42.419147 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-14 02:09:42.419956 | orchestrator | Wednesday 14 May 2025 02:09:42 +0000 (0:00:00.195) 0:00:41.762 ********* 2025-05-14 02:09:42.670186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:09:42.670670 | orchestrator | 2025-05-14 02:09:42.672102 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-14 02:09:42.673184 | orchestrator | Wednesday 14 May 2025 02:09:42 +0000 (0:00:00.251) 0:00:42.014 ********* 2025-05-14 02:09:44.162477 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:44.162586 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:44.162603 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:44.162870 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:44.164707 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:09:44.165741 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:44.166306 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:44.167294 | orchestrator | 2025-05-14 02:09:44.168871 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-14 02:09:44.168969 | orchestrator | Wednesday 14 May 2025 02:09:44 +0000 (0:00:01.492) 0:00:43.506 ********* 2025-05-14 02:09:45.168510 | orchestrator | changed: [testbed-manager] 2025-05-14 02:09:45.168657 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:09:45.169113 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:09:45.170102 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:09:45.170463 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:09:45.170696 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:09:45.172065 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:09:45.172423 | orchestrator | 2025-05-14 02:09:45.172634 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-14 02:09:45.172893 | orchestrator | Wednesday 14 May 2025 02:09:45 +0000 (0:00:01.007) 0:00:44.513 ********* 2025-05-14 02:09:46.034685 | orchestrator | ok: [testbed-manager] 2025-05-14 02:09:46.035332 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:09:46.035930 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:09:46.036751 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:09:46.038217 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:09:46.039288 | orchestrator | ok: 
[testbed-node-1] 2025-05-14 02:09:46.041490 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:09:46.042148 | orchestrator | 2025-05-14 02:09:46.042893 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-14 02:09:46.043766 | orchestrator | Wednesday 14 May 2025 02:09:46 +0000 (0:00:00.867) 0:00:45.381 ********* 2025-05-14 02:09:46.321828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:09:46.322808 | orchestrator | 2025-05-14 02:09:46.322844 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-14 02:09:46.323195 | orchestrator | Wednesday 14 May 2025 02:09:46 +0000 (0:00:00.282) 0:00:45.663 ********* 2025-05-14 02:09:47.365155 | orchestrator | changed: [testbed-manager] 2025-05-14 02:09:47.366582 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:09:47.367293 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:09:47.368018 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:09:47.368752 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:09:47.371892 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:09:47.371922 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:09:47.371935 | orchestrator | 2025-05-14 02:09:47.371948 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-14 02:09:47.371961 | orchestrator | Wednesday 14 May 2025 02:09:47 +0000 (0:00:01.045) 0:00:46.709 ********* 2025-05-14 02:09:47.471912 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:09:47.494135 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:09:47.525798 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:09:47.549143 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:09:47.669141 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:09:47.669251 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:09:47.669290 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:09:47.669310 | orchestrator | 2025-05-14 02:09:47.669329 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-14 02:09:47.670458 | orchestrator | Wednesday 14 May 2025 02:09:47 +0000 (0:00:00.302) 0:00:47.011 ********* 2025-05-14 02:10:01.083889 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:10:01.084004 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:10:01.084020 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:10:01.084031 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:10:01.084042 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:10:01.084052 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:10:01.084125 | orchestrator | changed: [testbed-manager] 2025-05-14 02:10:01.084392 | orchestrator | 2025-05-14 02:10:01.084782 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-14 02:10:01.085457 | orchestrator | Wednesday 14 May 2025 02:10:01 +0000 (0:00:13.410) 0:01:00.421 ********* 2025-05-14 02:10:02.674862 | orchestrator | ok: [testbed-manager] 2025-05-14 02:10:02.675192 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:10:02.675263 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:10:02.675977 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:10:02.676331 | 
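The rsyslog role above installs rsyslog, replaces rsyslog.conf, and adds a rule that forwards all syslog messages to a local fluentd daemon. A hedged sketch of that forwarding drop-in follows; the file name, the UDP transport and port 5140 are assumptions, only the "*.* @host:port" selector syntax is standard rsyslog.

- name: Forward syslog message to local fluentd daemon
  ansible.builtin.copy:
    dest: /etc/rsyslog.d/10-fluentd.conf   # assumed file name
    mode: "0644"
    content: |
      # send every facility/priority to the local fluentd syslog input (assumed port)
      *.* @127.0.0.1:5140

- name: Manage rsyslog service
  ansible.builtin.service:
    name: rsyslog
    state: started
    enabled: true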
orchestrator | ok: [testbed-node-2] 2025-05-14 02:10:02.677090 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:10:02.677542 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:10:02.678368 | orchestrator | 2025-05-14 02:10:02.678484 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-14 02:10:02.678887 | orchestrator | Wednesday 14 May 2025 02:10:02 +0000 (0:00:01.593) 0:01:02.014 ********* 2025-05-14 02:10:03.631038 | orchestrator | ok: [testbed-manager] 2025-05-14 02:10:03.631514 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:10:03.632598 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:10:03.634391 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:10:03.635726 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:10:03.637392 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:10:03.638690 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:10:03.640105 | orchestrator | 2025-05-14 02:10:03.641182 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-05-14 02:10:03.642476 | orchestrator | Wednesday 14 May 2025 02:10:03 +0000 (0:00:00.960) 0:01:02.974 ********* 2025-05-14 02:10:03.718710 | orchestrator | ok: [testbed-manager] 2025-05-14 02:10:03.753927 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:10:03.781755 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:10:03.805878 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:10:03.861938 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:10:03.862209 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:10:03.863258 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:10:03.863945 | orchestrator | 2025-05-14 02:10:03.864865 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-14 02:10:03.865596 | orchestrator | Wednesday 14 May 2025 02:10:03 +0000 (0:00:00.232) 0:01:03.207 ********* 2025-05-14 02:10:03.951278 | orchestrator | ok: [testbed-manager] 2025-05-14 02:10:03.981030 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:10:04.010093 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:10:04.039608 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:10:04.099754 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:10:04.100080 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:10:04.100657 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:10:04.101171 | orchestrator | 2025-05-14 02:10:04.101847 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-14 02:10:04.103109 | orchestrator | Wednesday 14 May 2025 02:10:04 +0000 (0:00:00.238) 0:01:03.446 ********* 2025-05-14 02:10:04.400570 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:10:04.401056 | orchestrator | 2025-05-14 02:10:04.402064 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-05-14 02:10:04.402563 | orchestrator | Wednesday 14 May 2025 02:10:04 +0000 (0:00:00.294) 0:01:03.741 ********* 2025-05-14 02:10:05.865553 | orchestrator | ok: [testbed-manager] 2025-05-14 02:10:05.865802 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:10:05.865828 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:10:05.868100 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:10:05.868155 | 
orchestrator | ok: [testbed-node-1] 2025-05-14 02:10:05.868288 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:10:05.868314 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:10:05.868841 | orchestrator | 2025-05-14 02:10:05.869525 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-14 02:10:05.870102 | orchestrator | Wednesday 14 May 2025 02:10:05 +0000 (0:00:01.467) 0:01:05.208 ********* 2025-05-14 02:10:06.432006 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:10:06.432952 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:10:06.433029 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:10:06.433969 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:10:06.434775 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:10:06.435341 | orchestrator | changed: [testbed-manager] 2025-05-14 02:10:06.436144 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:10:06.438129 | orchestrator | 2025-05-14 02:10:06.438443 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-14 02:10:06.439323 | orchestrator | Wednesday 14 May 2025 02:10:06 +0000 (0:00:00.566) 0:01:05.775 ********* 2025-05-14 02:10:06.504338 | orchestrator | ok: [testbed-manager] 2025-05-14 02:10:06.532437 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:10:06.555113 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:10:06.578719 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:10:06.642937 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:10:06.646397 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:10:06.646713 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:10:06.647189 | orchestrator | 2025-05-14 02:10:06.647828 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-14 02:10:06.647914 | orchestrator | Wednesday 14 May 2025 02:10:06 +0000 (0:00:00.211) 0:01:05.986 ********* 2025-05-14 02:10:07.692270 | orchestrator | ok: [testbed-manager] 2025-05-14 02:10:07.692426 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:10:07.694297 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:10:07.695294 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:10:07.696915 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:10:07.698390 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:10:07.699275 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:10:07.700477 | orchestrator | 2025-05-14 02:10:07.701696 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-14 02:10:07.703052 | orchestrator | Wednesday 14 May 2025 02:10:07 +0000 (0:00:01.049) 0:01:07.036 ********* 2025-05-14 02:10:09.234214 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:10:09.235059 | orchestrator | changed: [testbed-manager] 2025-05-14 02:10:09.235915 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:10:09.236665 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:10:09.239664 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:10:09.239684 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:10:09.239693 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:10:09.239703 | orchestrator | 2025-05-14 02:10:09.240349 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-14 02:10:09.240950 | orchestrator | Wednesday 14 May 2025 02:10:09 +0000 (0:00:01.541) 0:01:08.578 ********* 2025-05-14 02:10:11.286562 | orchestrator | ok: 
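The packages role above quietens needrestart, refreshes the apt cache, and then upgrades and installs packages (the "Install required packages" step that follows takes over 80 seconds). A sketch of the "Set needrestart mode" and "Upgrade packages" tasks under assumed paths and values; the 'a' (automatic) restart mode and the config file location are guesses, the apt module options are real.

- name: Set needrestart mode
  ansible.builtin.lineinfile:
    path: /etc/needrestart/needrestart.conf     # assumed location
    regexp: '^#?\$nrconf\{restart\}'
    line: "$nrconf{restart} = 'a';"             # assumed: restart services automatically

- name: Upgrade packages
  ansible.builtin.apt:
    upgrade: dist
    update_cache: true
    cache_valid_time: "{{ apt_cache_valid_time }}"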
[testbed-manager] 2025-05-14 02:10:11.286792 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:10:11.287452 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:10:11.290924 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:10:11.290968 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:10:11.290983 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:10:11.290999 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:10:11.291015 | orchestrator | 2025-05-14 02:10:11.291033 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-14 02:10:11.291051 | orchestrator | Wednesday 14 May 2025 02:10:11 +0000 (0:00:02.052) 0:01:10.630 ********* 2025-05-14 02:10:50.369133 | orchestrator | ok: [testbed-manager] 2025-05-14 02:10:50.369255 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:10:50.369271 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:10:50.369283 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:10:50.369295 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:10:50.369306 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:10:50.369317 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:10:50.369328 | orchestrator | 2025-05-14 02:10:50.369340 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-14 02:10:50.369416 | orchestrator | Wednesday 14 May 2025 02:10:50 +0000 (0:00:39.075) 0:01:49.706 ********* 2025-05-14 02:12:12.632847 | orchestrator | changed: [testbed-manager] 2025-05-14 02:12:12.632986 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:12:12.633267 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:12:12.634704 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:12:12.635586 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:12:12.636620 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:12:12.637945 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:12:12.638104 | orchestrator | 2025-05-14 02:12:12.638984 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-14 02:12:12.640186 | orchestrator | Wednesday 14 May 2025 02:12:12 +0000 (0:01:22.267) 0:03:11.973 ********* 2025-05-14 02:12:14.288567 | orchestrator | ok: [testbed-manager] 2025-05-14 02:12:14.290758 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:12:14.290776 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:12:14.290781 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:12:14.290786 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:12:14.290791 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:12:14.291557 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:12:14.292109 | orchestrator | 2025-05-14 02:12:14.292826 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-14 02:12:14.293138 | orchestrator | Wednesday 14 May 2025 02:12:14 +0000 (0:00:01.655) 0:03:13.629 ********* 2025-05-14 02:12:26.982365 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:12:26.982885 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:12:26.982914 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:12:26.982927 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:12:26.982939 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:12:26.985813 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:12:26.987450 | orchestrator | changed: [testbed-manager] 2025-05-14 02:12:26.988389 | orchestrator | 2025-05-14 02:12:26.990308 | orchestrator | TASK [osism.commons.sysctl : Include sysctl 
tasks] ***************************** 2025-05-14 02:12:26.990489 | orchestrator | Wednesday 14 May 2025 02:12:26 +0000 (0:00:12.693) 0:03:26.323 ********* 2025-05-14 02:12:27.410381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-14 02:12:27.411056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-14 02:12:27.411283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-14 02:12:27.415162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-14 02:12:27.415195 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-05-14 02:12:27.415208 | orchestrator | 2025-05-14 02:12:27.415221 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-14 02:12:27.415234 | orchestrator | Wednesday 14 May 2025 02:12:27 +0000 (0:00:00.431) 0:03:26.755 ********* 2025-05-14 02:12:27.469200 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-14 02:12:27.509609 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-14 02:12:27.511166 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:12:27.511206 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-14 02:12:27.539500 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:12:27.573187 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:12:27.573922 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-14 02:12:27.597182 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:12:29.177141 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 02:12:29.177446 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 02:12:29.178109 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 02:12:29.179418 | orchestrator | 2025-05-14 02:12:29.180502 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-14 02:12:29.181260 | orchestrator | Wednesday 14 May 2025 02:12:29 +0000 (0:00:01.766) 0:03:28.521 ********* 2025-05-14 02:12:29.245882 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-14 02:12:29.245991 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-14 02:12:29.246405 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-14 02:12:29.248043 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-14 02:12:29.248333 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-14 02:12:29.249051 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-14 02:12:29.250067 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-14 02:12:29.250517 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-14 02:12:29.250887 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-14 02:12:29.251300 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-14 02:12:29.251925 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-14 02:12:29.252427 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-14 02:12:29.290091 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:12:29.290424 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-14 02:12:29.290521 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-14 02:12:29.290640 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-14 02:12:29.360883 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-14 02:12:29.361003 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-14 02:12:29.361025 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-14 02:12:29.361069 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-14 02:12:29.361090 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-14 02:12:29.361709 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-14 02:12:29.361908 | orchestrator | skipping: [testbed-node-4] => 
(item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-14 02:12:29.362177 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-14 02:12:29.363651 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-14 02:12:29.363792 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-14 02:12:29.364091 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-14 02:12:29.364317 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-14 02:12:29.364916 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-14 02:12:29.365720 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-14 02:12:29.366285 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-14 02:12:29.366317 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-14 02:12:29.366585 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-14 02:12:29.367143 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-14 02:12:29.367363 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-14 02:12:29.367885 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-14 02:12:29.369155 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-14 02:12:29.385894 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-14 02:12:29.386069 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:12:29.386150 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-14 02:12:29.386707 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-14 02:12:29.386882 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-14 02:12:29.417992 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:12:35.027357 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:12:35.028797 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-14 02:12:35.029997 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-14 02:12:35.030104 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-14 02:12:35.031426 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-14 02:12:35.032347 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-14 02:12:35.032816 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-14 02:12:35.033932 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.core.wmem_max', 'value': 16777216}) 2025-05-14 02:12:35.034448 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-14 02:12:35.035307 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-14 02:12:35.036429 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-14 02:12:35.037437 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-14 02:12:35.038067 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-14 02:12:35.039085 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-14 02:12:35.039932 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-14 02:12:35.041236 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-14 02:12:35.042269 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-14 02:12:35.043531 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-14 02:12:35.044440 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-14 02:12:35.044996 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-14 02:12:35.045401 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-14 02:12:35.045882 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-14 02:12:35.046219 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-14 02:12:35.046822 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-14 02:12:35.047367 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-14 02:12:35.047919 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-14 02:12:35.048176 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-14 02:12:35.048736 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-14 02:12:35.048965 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-14 02:12:35.049313 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-14 02:12:35.050592 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-14 02:12:35.050615 | orchestrator | 2025-05-14 02:12:35.050630 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-14 02:12:35.050698 | orchestrator | Wednesday 14 May 2025 02:12:35 +0000 (0:00:05.849) 0:03:34.371 ********* 2025-05-14 02:12:35.667297 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 02:12:35.667403 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 
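The sysctl role includes sysctl.yml once per parameter group (elasticsearch, rabbitmq, generic, compute, k3s_node) and loops over the name/value pairs shown in the include above; hosts outside the matching group skip the set, which is why the rabbitmq values land only on testbed-node-0/1/2 while the compute and k3s_node values land on testbed-node-3/4/5. A sketch of the included task, grounded in the generic vm.swappiness entry from the log; the loop_var name and the group-membership condition are assumptions.

- name: "Set sysctl parameters on {{ item.key }}"
  ansible.posix.sysctl:
    name: "{{ parameter.name }}"
    value: "{{ parameter.value }}"
    state: present
    sysctl_set: true    # also apply the value to the running kernel
    reload: true
  loop: "{{ item.value }}"    # e.g. [{'name': 'vm.swappiness', 'value': 1}] for 'generic'
  loop_control:
    loop_var: parameter                                       # assumed loop_var name
  when: item.key == 'generic' or item.key in group_names      # assumed condition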
2025-05-14 02:12:35.667486 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 02:12:35.667742 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 02:12:35.668529 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 02:12:35.668713 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 02:12:35.668926 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-14 02:12:35.669541 | orchestrator | 2025-05-14 02:12:35.669951 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-14 02:12:35.673894 | orchestrator | Wednesday 14 May 2025 02:12:35 +0000 (0:00:00.632) 0:03:35.004 ********* 2025-05-14 02:12:35.730578 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-14 02:12:35.763528 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:12:35.917244 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-14 02:12:35.917438 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-14 02:12:36.233486 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:12:36.233621 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:12:36.233963 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-14 02:12:36.234897 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:12:36.237019 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-14 02:12:36.238701 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-14 02:12:36.238795 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-14 02:12:36.244112 | orchestrator | 2025-05-14 02:12:36.244176 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-14 02:12:36.244195 | orchestrator | Wednesday 14 May 2025 02:12:36 +0000 (0:00:00.573) 0:03:35.578 ********* 2025-05-14 02:12:36.291342 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-14 02:12:36.320556 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:12:36.401227 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-14 02:12:36.802209 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-14 02:12:36.802541 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:12:36.803804 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:12:36.804224 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-14 02:12:36.805199 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:12:36.805713 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-14 02:12:36.806513 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024}) 2025-05-14 02:12:36.806942 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-14 02:12:36.807616 | orchestrator | 2025-05-14 02:12:36.808229 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-14 02:12:36.808432 | orchestrator | Wednesday 14 May 2025 02:12:36 +0000 (0:00:00.568) 0:03:36.146 ********* 2025-05-14 02:12:36.884580 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:12:36.906514 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:12:36.936126 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:12:36.962262 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:12:37.091953 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:12:37.092324 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:12:37.093151 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:12:37.094162 | orchestrator | 2025-05-14 02:12:37.094187 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-14 02:12:37.094807 | orchestrator | Wednesday 14 May 2025 02:12:37 +0000 (0:00:00.289) 0:03:36.436 ********* 2025-05-14 02:12:43.023826 | orchestrator | ok: [testbed-manager] 2025-05-14 02:12:43.024197 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:12:43.024374 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:12:43.025574 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:12:43.025908 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:12:43.026927 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:12:43.027810 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:12:43.028029 | orchestrator | 2025-05-14 02:12:43.029009 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-14 02:12:43.029744 | orchestrator | Wednesday 14 May 2025 02:12:43 +0000 (0:00:05.932) 0:03:42.369 ********* 2025-05-14 02:12:43.105573 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-14 02:12:43.105880 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-14 02:12:43.147094 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:12:43.147433 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-14 02:12:43.196314 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:12:43.196613 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-14 02:12:43.244552 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:12:43.245219 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-14 02:12:43.278713 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:12:43.343606 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:12:43.344953 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-14 02:12:43.346406 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:12:43.347839 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-14 02:12:43.348877 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:12:43.349980 | orchestrator | 2025-05-14 02:12:43.350269 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-14 02:12:43.351130 | orchestrator | Wednesday 14 May 2025 02:12:43 +0000 (0:00:00.319) 0:03:42.688 ********* 2025-05-14 02:12:44.390286 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-14 02:12:44.390859 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-14 02:12:44.393937 | orchestrator | 
ok: [testbed-node-4] => (item=cron) 2025-05-14 02:12:44.393981 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-14 02:12:44.393994 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-14 02:12:44.394006 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-14 02:12:44.394426 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-14 02:12:44.395241 | orchestrator | 2025-05-14 02:12:44.396056 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-14 02:12:44.396784 | orchestrator | Wednesday 14 May 2025 02:12:44 +0000 (0:00:01.044) 0:03:43.733 ********* 2025-05-14 02:12:44.809353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:12:44.809873 | orchestrator | 2025-05-14 02:12:44.810312 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-14 02:12:44.812216 | orchestrator | Wednesday 14 May 2025 02:12:44 +0000 (0:00:00.420) 0:03:44.154 ********* 2025-05-14 02:12:46.097257 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:12:46.097485 | orchestrator | ok: [testbed-manager] 2025-05-14 02:12:46.099421 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:12:46.100870 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:12:46.103202 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:12:46.104398 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:12:46.105307 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:12:46.106312 | orchestrator | 2025-05-14 02:12:46.107847 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-14 02:12:46.109413 | orchestrator | Wednesday 14 May 2025 02:12:46 +0000 (0:00:01.285) 0:03:45.439 ********* 2025-05-14 02:12:46.679417 | orchestrator | ok: [testbed-manager] 2025-05-14 02:12:46.680338 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:12:46.681138 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:12:46.682807 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:12:46.683146 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:12:46.684209 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:12:46.685191 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:12:46.686185 | orchestrator | 2025-05-14 02:12:46.687141 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-14 02:12:46.688049 | orchestrator | Wednesday 14 May 2025 02:12:46 +0000 (0:00:00.583) 0:03:46.023 ********* 2025-05-14 02:12:47.348921 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:12:47.349097 | orchestrator | changed: [testbed-manager] 2025-05-14 02:12:47.350563 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:12:47.350591 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:12:47.351610 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:12:47.352224 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:12:47.353010 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:12:47.353777 | orchestrator | 2025-05-14 02:12:47.354981 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-14 02:12:47.355470 | orchestrator | Wednesday 14 May 2025 02:12:47 +0000 (0:00:00.669) 0:03:46.693 ********* 2025-05-14 02:12:47.971986 | orchestrator | ok: [testbed-manager] 2025-05-14 
02:12:47.972151 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:12:47.973359 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:12:47.974290 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:12:47.974959 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:12:47.976793 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:12:47.978195 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:12:47.978669 | orchestrator | 2025-05-14 02:12:47.979201 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-14 02:12:47.980127 | orchestrator | Wednesday 14 May 2025 02:12:47 +0000 (0:00:00.622) 0:03:47.315 ********* 2025-05-14 02:12:48.943191 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747187043.8440468, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.944009 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747187076.3939152, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.944078 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747187066.9836254, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.944880 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747187087.351549, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.945958 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747187074.7036977, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.946457 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747187087.6286185, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.947258 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747187079.945671, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.948089 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747187065.5731108, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.949019 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747186999.3721912, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.949682 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747186991.5919404, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.950322 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747187012.9862573, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.951209 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747187003.8543189, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.951774 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747186998.5641954, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.952205 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747187003.527483, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:12:48.952613 | orchestrator | 2025-05-14 02:12:48.953044 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-14 02:12:48.953474 | orchestrator | Wednesday 14 May 2025 02:12:48 +0000 (0:00:00.971) 0:03:48.287 ********* 2025-05-14 02:12:50.007563 | orchestrator | changed: [testbed-manager] 2025-05-14 02:12:50.008099 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:12:50.008358 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:12:50.010240 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:12:50.010430 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:12:50.011547 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:12:50.012452 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:12:50.012815 | orchestrator | 2025-05-14 02:12:50.013927 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-14 02:12:50.014303 | orchestrator | Wednesday 14 May 2025 02:12:49 +0000 (0:00:01.063) 0:03:49.351 ********* 2025-05-14 02:12:51.130879 | orchestrator | changed: [testbed-manager] 2025-05-14 02:12:51.133639 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:12:51.133743 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:12:51.134199 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:12:51.134940 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:12:51.135400 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:12:51.135940 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:12:51.136723 | orchestrator | 2025-05-14 02:12:51.137031 | orchestrator | TASK [osism.commons.motd : Configure SSH to print 
the motd] ******************** 2025-05-14 02:12:51.137615 | orchestrator | Wednesday 14 May 2025 02:12:51 +0000 (0:00:01.124) 0:03:50.475 ********* 2025-05-14 02:12:51.205405 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:12:51.288693 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:12:51.325914 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:12:51.362506 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:12:51.451953 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:12:51.452410 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:12:51.454809 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:12:51.454838 | orchestrator | 2025-05-14 02:12:51.456361 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-14 02:12:51.456634 | orchestrator | Wednesday 14 May 2025 02:12:51 +0000 (0:00:00.321) 0:03:50.796 ********* 2025-05-14 02:12:52.209691 | orchestrator | ok: [testbed-manager] 2025-05-14 02:12:52.210443 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:12:52.210739 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:12:52.211836 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:12:52.212353 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:12:52.212812 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:12:52.213539 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:12:52.213984 | orchestrator | 2025-05-14 02:12:52.214730 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-14 02:12:52.215254 | orchestrator | Wednesday 14 May 2025 02:12:52 +0000 (0:00:00.758) 0:03:51.555 ********* 2025-05-14 02:12:52.618427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:12:52.618720 | orchestrator | 2025-05-14 02:12:52.621818 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-14 02:12:52.621849 | orchestrator | Wednesday 14 May 2025 02:12:52 +0000 (0:00:00.407) 0:03:51.962 ********* 2025-05-14 02:12:59.949280 | orchestrator | ok: [testbed-manager] 2025-05-14 02:12:59.954088 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:12:59.960887 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:12:59.961433 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:12:59.962201 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:12:59.963166 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:12:59.964011 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:12:59.964338 | orchestrator | 2025-05-14 02:12:59.965536 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-14 02:12:59.966604 | orchestrator | Wednesday 14 May 2025 02:12:59 +0000 (0:00:07.329) 0:03:59.291 ********* 2025-05-14 02:13:01.117740 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:13:01.118013 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:01.119022 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:13:01.120114 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:13:01.120961 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:13:01.121746 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:13:01.122639 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:13:01.123392 | orchestrator | 2025-05-14 02:13:01.124650 | orchestrator | 
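For orientation: the osism.commons.motd steps a few entries above disable Ubuntu's dynamic motd-news service, strip the pam_motd.so rules from the files found in /etc/pam.d (sshd and login in this run), install static /etc/motd and /etc/issue files, and leave sshd configured to not print the motd itself. A minimal hand-written sketch of the same idea follows; it is not the role's actual implementation, and the regexp, file paths and sshd option handling are assumptions based on the task names:

    # Sketch only: approximates the pam_motd.so removal and sshd PrintMotd handling seen above.
    - name: Remove pam_motd.so rules so PAM no longer triggers the dynamic motd
      ansible.builtin.lineinfile:
        path: "{{ item }}"
        regexp: '^session\s+optional\s+pam_motd\.so'   # assumed pattern for the Debian-family default rules
        state: absent
      loop:
        - /etc/pam.d/sshd
        - /etc/pam.d/login

    - name: Configure SSH to not print the motd
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PrintMotd'
        line: 'PrintMotd no'
        # a handler restarting sshd would normally be notified here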
TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-14 02:13:01.125642 | orchestrator | Wednesday 14 May 2025 02:13:01 +0000 (0:00:01.167) 0:04:00.459 ********* 2025-05-14 02:13:02.123579 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:02.124430 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:13:02.125414 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:13:02.126423 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:13:02.127152 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:13:02.128159 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:13:02.128775 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:13:02.129414 | orchestrator | 2025-05-14 02:13:02.130211 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-14 02:13:02.130738 | orchestrator | Wednesday 14 May 2025 02:13:02 +0000 (0:00:01.005) 0:04:01.465 ********* 2025-05-14 02:13:02.564463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:13:02.565131 | orchestrator | 2025-05-14 02:13:02.566225 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-14 02:13:02.566966 | orchestrator | Wednesday 14 May 2025 02:13:02 +0000 (0:00:00.441) 0:04:01.906 ********* 2025-05-14 02:13:11.176237 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:11.177209 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:11.177614 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:11.178509 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:11.179178 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:11.179866 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:11.183716 | orchestrator | changed: [testbed-manager] 2025-05-14 02:13:11.184269 | orchestrator | 2025-05-14 02:13:11.184862 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-14 02:13:11.185795 | orchestrator | Wednesday 14 May 2025 02:13:11 +0000 (0:00:08.612) 0:04:10.519 ********* 2025-05-14 02:13:11.919089 | orchestrator | changed: [testbed-manager] 2025-05-14 02:13:11.920370 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:11.921886 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:11.921954 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:11.922689 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:11.923905 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:11.924817 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:11.925638 | orchestrator | 2025-05-14 02:13:11.926279 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-14 02:13:11.927026 | orchestrator | Wednesday 14 May 2025 02:13:11 +0000 (0:00:00.744) 0:04:11.264 ********* 2025-05-14 02:13:13.038104 | orchestrator | changed: [testbed-manager] 2025-05-14 02:13:13.038250 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:13.039287 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:13.040271 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:13.041597 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:13.042621 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:13.043497 | orchestrator | changed: [testbed-node-2] 
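The osism.services.smartd block just above installs smartmontools, creates /var/log/smartd and copies a smartd configuration before the service is started in the next task. A rough equivalent in plain Ansible, with the directory mode and service unit name assumed rather than taken from the role, would be:

    # Sketch only: mirrors the smartd-related tasks in the log, not the real role code.
    - name: Install smartmontools package
      ansible.builtin.apt:
        name: smartmontools
        state: present

    - name: Create /var/log/smartd directory
      ansible.builtin.file:
        path: /var/log/smartd
        state: directory
        mode: "0755"            # assumption; the role may use other permissions

    - name: Manage smartd service
      ansible.builtin.service:
        name: smartd            # unit name assumed; some distributions use smartmontools.service
        state: started
        enabled: true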
2025-05-14 02:13:13.044091 | orchestrator | 2025-05-14 02:13:13.044622 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-14 02:13:13.045291 | orchestrator | Wednesday 14 May 2025 02:13:13 +0000 (0:00:01.116) 0:04:12.381 ********* 2025-05-14 02:13:14.049886 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:14.050388 | orchestrator | changed: [testbed-manager] 2025-05-14 02:13:14.051071 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:14.051858 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:14.055146 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:14.055628 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:14.056237 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:14.056689 | orchestrator | 2025-05-14 02:13:14.057241 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-14 02:13:14.057609 | orchestrator | Wednesday 14 May 2025 02:13:14 +0000 (0:00:01.012) 0:04:13.393 ********* 2025-05-14 02:13:14.166285 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:14.204045 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:13:14.235556 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:13:14.266900 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:13:14.332166 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:13:14.333003 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:13:14.334094 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:13:14.336135 | orchestrator | 2025-05-14 02:13:14.336791 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-14 02:13:14.337575 | orchestrator | Wednesday 14 May 2025 02:13:14 +0000 (0:00:00.285) 0:04:13.678 ********* 2025-05-14 02:13:14.424650 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:14.500238 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:13:14.537477 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:13:14.576102 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:13:14.679385 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:13:14.679910 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:13:14.680728 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:13:14.681971 | orchestrator | 2025-05-14 02:13:14.682471 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-14 02:13:14.683089 | orchestrator | Wednesday 14 May 2025 02:13:14 +0000 (0:00:00.345) 0:04:14.024 ********* 2025-05-14 02:13:14.782073 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:14.816422 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:13:14.850709 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:13:14.908930 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:13:15.008702 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:13:15.008896 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:13:15.010011 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:13:15.011517 | orchestrator | 2025-05-14 02:13:15.012237 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-14 02:13:15.013036 | orchestrator | Wednesday 14 May 2025 02:13:14 +0000 (0:00:00.327) 0:04:14.352 ********* 2025-05-14 02:13:20.798211 | orchestrator | ok: [testbed-manager] 2025-05-14 02:13:20.800505 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:13:20.800537 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:13:20.800843 | orchestrator 
| ok: [testbed-node-5] 2025-05-14 02:13:20.801536 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:13:20.802306 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:13:20.802943 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:13:20.803694 | orchestrator | 2025-05-14 02:13:20.806088 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-14 02:13:20.806871 | orchestrator | Wednesday 14 May 2025 02:13:20 +0000 (0:00:05.787) 0:04:20.140 ********* 2025-05-14 02:13:21.267842 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:13:21.268045 | orchestrator | 2025-05-14 02:13:21.268935 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-14 02:13:21.271691 | orchestrator | Wednesday 14 May 2025 02:13:21 +0000 (0:00:00.470) 0:04:20.610 ********* 2025-05-14 02:13:21.348569 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-14 02:13:21.349238 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-14 02:13:21.398732 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-14 02:13:21.401333 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:13:21.401381 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-14 02:13:21.402871 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-14 02:13:21.442637 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:13:21.442712 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-14 02:13:21.443226 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-14 02:13:21.443697 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-14 02:13:21.484503 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:13:21.486586 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-14 02:13:21.486616 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-14 02:13:21.541994 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:13:21.546840 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-14 02:13:21.546881 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-14 02:13:21.630569 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:13:21.631585 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:13:21.633166 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-14 02:13:21.637050 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-14 02:13:21.637090 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:13:21.637102 | orchestrator | 2025-05-14 02:13:21.637115 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-14 02:13:21.637192 | orchestrator | Wednesday 14 May 2025 02:13:21 +0000 (0:00:00.365) 0:04:20.975 ********* 2025-05-14 02:13:22.045956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:13:22.046196 | orchestrator | 2025-05-14 02:13:22.046954 | 
orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-14 02:13:22.047930 | orchestrator | Wednesday 14 May 2025 02:13:22 +0000 (0:00:00.414) 0:04:21.390 ********* 2025-05-14 02:13:22.130988 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-14 02:13:22.131102 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-14 02:13:22.156104 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:13:22.156188 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-14 02:13:22.195090 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:13:22.195189 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-14 02:13:22.255427 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:13:22.256170 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-14 02:13:22.289926 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:13:22.365249 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-14 02:13:22.365359 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:13:22.365371 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:13:22.365382 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-14 02:13:22.365440 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:13:22.366002 | orchestrator | 2025-05-14 02:13:22.366702 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-14 02:13:22.367449 | orchestrator | Wednesday 14 May 2025 02:13:22 +0000 (0:00:00.316) 0:04:21.707 ********* 2025-05-14 02:13:22.836131 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:13:22.838422 | orchestrator | 2025-05-14 02:13:22.838618 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-14 02:13:22.839038 | orchestrator | Wednesday 14 May 2025 02:13:22 +0000 (0:00:00.468) 0:04:22.175 ********* 2025-05-14 02:13:56.732668 | orchestrator | changed: [testbed-manager] 2025-05-14 02:13:56.732925 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:13:56.735383 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:13:56.735659 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:13:56.736557 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:13:56.737963 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:13:56.737992 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:13:56.738794 | orchestrator | 2025-05-14 02:13:56.738820 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-14 02:13:56.739323 | orchestrator | Wednesday 14 May 2025 02:13:56 +0000 (0:00:33.897) 0:04:56.073 ********* 2025-05-14 02:14:04.593365 | orchestrator | changed: [testbed-manager] 2025-05-14 02:14:04.594394 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:04.596567 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:04.597082 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:04.597614 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:04.598980 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:04.599234 | orchestrator | changed: [testbed-node-2] 
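The osism.commons.cleanup tasks around this point follow a simple pattern: disable the apt-daily timers and unwanted services where present (both skipped on these nodes), purge a list of packages in the long-running "Cleanup installed packages" step, remove cloud-init, and then run the usual apt housekeeping (the unattended-upgrades removal, autoclean and autoremove follow directly below). A compact sketch of that pattern, with the package list assumed since the real list comes from the role defaults:

    # Sketch only: the actual package and service lists are defined by osism.commons.cleanup.
    - name: Remove unwanted packages
      ansible.builtin.apt:
        name:
          - cloud-init
          - unattended-upgrades
        state: absent
        purge: true

    - name: Remove useless packages from the cache and unused dependencies
      ansible.builtin.apt:
        autoclean: true
        autoremove: true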
2025-05-14 02:14:04.600233 | orchestrator | 2025-05-14 02:14:04.601339 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-14 02:14:04.601395 | orchestrator | Wednesday 14 May 2025 02:14:04 +0000 (0:00:07.864) 0:05:03.937 ********* 2025-05-14 02:14:11.824607 | orchestrator | changed: [testbed-manager] 2025-05-14 02:14:11.824900 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:11.824976 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:11.825912 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:11.828073 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:11.829126 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:11.829979 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:11.830621 | orchestrator | 2025-05-14 02:14:11.831537 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-14 02:14:11.832568 | orchestrator | Wednesday 14 May 2025 02:14:11 +0000 (0:00:07.229) 0:05:11.167 ********* 2025-05-14 02:14:13.397801 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:13.399809 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:14:13.401046 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:14:13.402386 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:14:13.403300 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:14:13.405377 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:14:13.406698 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:14:13.408116 | orchestrator | 2025-05-14 02:14:13.408507 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-14 02:14:13.409314 | orchestrator | Wednesday 14 May 2025 02:14:13 +0000 (0:00:01.574) 0:05:12.742 ********* 2025-05-14 02:14:19.129739 | orchestrator | changed: [testbed-manager] 2025-05-14 02:14:19.132082 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:19.132120 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:19.133903 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:19.134834 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:19.136791 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:19.138594 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:19.140057 | orchestrator | 2025-05-14 02:14:19.140096 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-14 02:14:19.140532 | orchestrator | Wednesday 14 May 2025 02:14:19 +0000 (0:00:05.731) 0:05:18.473 ********* 2025-05-14 02:14:19.563485 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:14:19.563575 | orchestrator | 2025-05-14 02:14:19.563587 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-14 02:14:19.564393 | orchestrator | Wednesday 14 May 2025 02:14:19 +0000 (0:00:00.433) 0:05:18.907 ********* 2025-05-14 02:14:20.299239 | orchestrator | changed: [testbed-manager] 2025-05-14 02:14:20.299397 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:20.300997 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:20.302118 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:20.303226 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:20.304409 | orchestrator | changed: 
[testbed-node-1] 2025-05-14 02:14:20.304917 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:20.306089 | orchestrator | 2025-05-14 02:14:20.307009 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-14 02:14:20.307479 | orchestrator | Wednesday 14 May 2025 02:14:20 +0000 (0:00:00.736) 0:05:19.643 ********* 2025-05-14 02:14:21.814302 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:21.814469 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:14:21.814522 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:14:21.815865 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:14:21.816451 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:14:21.817570 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:14:21.818258 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:14:21.819179 | orchestrator | 2025-05-14 02:14:21.819465 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-14 02:14:21.819871 | orchestrator | Wednesday 14 May 2025 02:14:21 +0000 (0:00:01.513) 0:05:21.156 ********* 2025-05-14 02:14:22.557089 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:22.557192 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:22.558071 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:22.558099 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:22.558208 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:22.560456 | orchestrator | changed: [testbed-manager] 2025-05-14 02:14:22.561187 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:22.561285 | orchestrator | 2025-05-14 02:14:22.561998 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-14 02:14:22.562736 | orchestrator | Wednesday 14 May 2025 02:14:22 +0000 (0:00:00.744) 0:05:21.901 ********* 2025-05-14 02:14:22.646206 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:14:22.672868 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:22.696903 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:22.725311 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:22.778629 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:22.779866 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:22.781558 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:22.782244 | orchestrator | 2025-05-14 02:14:22.783003 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-14 02:14:22.784325 | orchestrator | Wednesday 14 May 2025 02:14:22 +0000 (0:00:00.223) 0:05:22.125 ********* 2025-05-14 02:14:22.861560 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:14:22.890824 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:22.917346 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:22.943085 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:23.094356 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:23.095030 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:23.095257 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:23.099110 | orchestrator | 2025-05-14 02:14:23.099929 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-14 02:14:23.100243 | orchestrator | Wednesday 14 May 2025 02:14:23 +0000 (0:00:00.316) 0:05:22.441 ********* 2025-05-14 02:14:23.191771 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:23.223351 | 
orchestrator | ok: [testbed-node-3] 2025-05-14 02:14:23.273220 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:14:23.302300 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:14:23.380332 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:14:23.380427 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:14:23.381934 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:14:23.385064 | orchestrator | 2025-05-14 02:14:23.385700 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-14 02:14:23.386334 | orchestrator | Wednesday 14 May 2025 02:14:23 +0000 (0:00:00.285) 0:05:22.726 ********* 2025-05-14 02:14:23.475467 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:14:23.496538 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:23.522988 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:23.554331 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:23.602929 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:23.603993 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:23.604106 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:23.605146 | orchestrator | 2025-05-14 02:14:23.606150 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-14 02:14:23.608942 | orchestrator | Wednesday 14 May 2025 02:14:23 +0000 (0:00:00.222) 0:05:22.949 ********* 2025-05-14 02:14:23.700155 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:23.738094 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:14:23.759574 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:14:23.795141 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:14:23.848798 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:14:23.848943 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:14:23.848994 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:14:23.849458 | orchestrator | 2025-05-14 02:14:23.849848 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-14 02:14:23.850195 | orchestrator | Wednesday 14 May 2025 02:14:23 +0000 (0:00:00.246) 0:05:23.196 ********* 2025-05-14 02:14:23.917611 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:14:23.943315 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:23.969616 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:24.047119 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:24.101177 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:24.101664 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:24.102362 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:24.103135 | orchestrator | 2025-05-14 02:14:24.103851 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-14 02:14:24.104488 | orchestrator | Wednesday 14 May 2025 02:14:24 +0000 (0:00:00.252) 0:05:23.448 ********* 2025-05-14 02:14:24.169245 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:14:24.198072 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:24.259088 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:24.288818 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:24.334681 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:24.334808 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:24.335535 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:24.335942 | orchestrator | 2025-05-14 02:14:24.337114 | orchestrator | TASK 
[osism.services.docker : Include docker install tasks] ******************** 2025-05-14 02:14:24.337490 | orchestrator | Wednesday 14 May 2025 02:14:24 +0000 (0:00:00.233) 0:05:23.681 ********* 2025-05-14 02:14:24.768419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:14:24.769763 | orchestrator | 2025-05-14 02:14:24.772959 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-14 02:14:24.774236 | orchestrator | Wednesday 14 May 2025 02:14:24 +0000 (0:00:00.432) 0:05:24.114 ********* 2025-05-14 02:14:25.593336 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:25.594616 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:14:25.594871 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:14:25.595798 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:14:25.596245 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:14:25.596642 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:14:25.598154 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:14:25.598197 | orchestrator | 2025-05-14 02:14:25.598573 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-14 02:14:25.599154 | orchestrator | Wednesday 14 May 2025 02:14:25 +0000 (0:00:00.822) 0:05:24.936 ********* 2025-05-14 02:14:28.362623 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:14:28.365387 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:14:28.366821 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:14:28.367725 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:14:28.369645 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:14:28.369954 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:14:28.371020 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:28.371717 | orchestrator | 2025-05-14 02:14:28.372488 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-14 02:14:28.373352 | orchestrator | Wednesday 14 May 2025 02:14:28 +0000 (0:00:02.770) 0:05:27.707 ********* 2025-05-14 02:14:28.479026 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-14 02:14:28.479333 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-14 02:14:28.482092 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-14 02:14:28.554441 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-14 02:14:28.557340 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-14 02:14:28.625129 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:14:28.625881 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-14 02:14:28.626615 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-14 02:14:28.627972 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-14 02:14:28.630831 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-14 02:14:28.730295 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:28.731872 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-14 02:14:28.732471 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-14 02:14:28.733452 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-14 
02:14:28.805287 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:28.808711 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-14 02:14:28.810324 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-14 02:14:28.810971 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-14 02:14:28.877414 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:28.878978 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-14 02:14:28.880045 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-14 02:14:28.881188 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-14 02:14:29.014533 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:29.015103 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:29.016710 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-14 02:14:29.018149 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-14 02:14:29.018941 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-14 02:14:29.020365 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:29.020865 | orchestrator | 2025-05-14 02:14:29.022093 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-14 02:14:29.023100 | orchestrator | Wednesday 14 May 2025 02:14:29 +0000 (0:00:00.653) 0:05:28.361 ********* 2025-05-14 02:14:34.976613 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:34.976825 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:34.977170 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:34.978150 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:34.980942 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:34.981093 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:34.981438 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:34.982504 | orchestrator | 2025-05-14 02:14:34.983135 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-14 02:14:34.984277 | orchestrator | Wednesday 14 May 2025 02:14:34 +0000 (0:00:05.957) 0:05:34.319 ********* 2025-05-14 02:14:36.010363 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:36.011344 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:36.013100 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:36.023160 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:36.023182 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:36.024966 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:36.025856 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:36.026930 | orchestrator | 2025-05-14 02:14:36.028321 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-14 02:14:36.030311 | orchestrator | Wednesday 14 May 2025 02:14:36 +0000 (0:00:01.034) 0:05:35.353 ********* 2025-05-14 02:14:43.271713 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:43.271970 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:43.272617 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:43.273814 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:43.275776 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:43.277651 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:43.278605 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:43.279996 | orchestrator | 
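By this point the osism.services.docker role has removed the old architecture-dependent repository entry, installed apt-transport-https, and added the Docker repository GPG key and source. The next tasks refresh the package cache, pin the docker and docker-cli versions and lock containerd against upgrades. A hand-written sketch of that repository setup and lock, with the download URL, key path and package name being assumptions rather than values taken from the role:

    # Sketch only: the role's real key handling, repo URL and pinning mechanism may differ.
    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg     # assumed upstream key URL
        dest: /etc/apt/keyrings/docker.asc                    # assumed key location
        mode: "0644"

    - name: Add repository
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/{{ ansible_distribution | lower }} {{ ansible_distribution_release }} stable"
        state: present

    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true

    - name: Lock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io          # assumed package name from the Docker repository
        selection: hold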
2025-05-14 02:14:43.280016 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-14 02:14:43.280789 | orchestrator | Wednesday 14 May 2025 02:14:43 +0000 (0:00:07.261) 0:05:42.615 ********* 2025-05-14 02:14:46.425204 | orchestrator | changed: [testbed-manager] 2025-05-14 02:14:46.426135 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:46.426185 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:46.427021 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:46.427459 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:46.428771 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:46.429250 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:46.429993 | orchestrator | 2025-05-14 02:14:46.430394 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-14 02:14:46.432179 | orchestrator | Wednesday 14 May 2025 02:14:46 +0000 (0:00:03.153) 0:05:45.769 ********* 2025-05-14 02:14:47.700159 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:47.701120 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:47.701967 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:47.703108 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:47.703630 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:47.704477 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:47.705179 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:47.705558 | orchestrator | 2025-05-14 02:14:47.705888 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-14 02:14:47.706527 | orchestrator | Wednesday 14 May 2025 02:14:47 +0000 (0:00:01.275) 0:05:47.045 ********* 2025-05-14 02:14:49.250172 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:49.250374 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:49.251593 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:14:49.252531 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:49.253756 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:49.254152 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:49.254920 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:49.255643 | orchestrator | 2025-05-14 02:14:49.255971 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-14 02:14:49.256922 | orchestrator | Wednesday 14 May 2025 02:14:49 +0000 (0:00:01.547) 0:05:48.592 ********* 2025-05-14 02:14:49.536118 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:14:49.600177 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:14:49.667921 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:14:49.820875 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:14:49.824622 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:14:49.824656 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:14:49.824668 | orchestrator | changed: [testbed-manager] 2025-05-14 02:14:49.825262 | orchestrator | 2025-05-14 02:14:49.825857 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-14 02:14:49.826162 | orchestrator | Wednesday 14 May 2025 02:14:49 +0000 (0:00:00.573) 0:05:49.165 ********* 2025-05-14 02:14:59.098384 | orchestrator | ok: [testbed-manager] 2025-05-14 02:14:59.099339 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:14:59.101531 | orchestrator | changed: [testbed-node-4] 
2025-05-14 02:14:59.102348 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:14:59.103078 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:14:59.103712 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:14:59.104230 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:14:59.104679 | orchestrator | 2025-05-14 02:14:59.105152 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-14 02:14:59.105964 | orchestrator | Wednesday 14 May 2025 02:14:59 +0000 (0:00:09.276) 0:05:58.441 ********* 2025-05-14 02:15:00.001588 | orchestrator | changed: [testbed-manager] 2025-05-14 02:15:00.001688 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:00.002212 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:00.003005 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:00.003791 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:00.005415 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:00.006556 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:00.006623 | orchestrator | 2025-05-14 02:15:00.007245 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-14 02:15:00.008270 | orchestrator | Wednesday 14 May 2025 02:14:59 +0000 (0:00:00.905) 0:05:59.346 ********* 2025-05-14 02:15:12.650381 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:12.650558 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:12.650996 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:12.651282 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:12.653900 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:12.654299 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:12.655598 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:12.656877 | orchestrator | 2025-05-14 02:15:12.656898 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-14 02:15:12.657146 | orchestrator | Wednesday 14 May 2025 02:15:12 +0000 (0:00:12.644) 0:06:11.991 ********* 2025-05-14 02:15:24.972607 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:24.972786 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:24.972809 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:24.972821 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:24.973075 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:24.974535 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:24.975309 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:24.975635 | orchestrator | 2025-05-14 02:15:24.976952 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-14 02:15:24.977654 | orchestrator | Wednesday 14 May 2025 02:15:24 +0000 (0:00:12.322) 0:06:24.313 ********* 2025-05-14 02:15:25.420716 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-14 02:15:26.193974 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-14 02:15:26.194307 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-14 02:15:26.194393 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-14 02:15:26.194900 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-14 02:15:26.195675 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-14 02:15:26.197006 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-14 02:15:26.197626 | 
orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-14 02:15:26.198508 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-14 02:15:26.199274 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-14 02:15:26.199668 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-14 02:15:26.200384 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-14 02:15:26.200995 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-14 02:15:26.201366 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-14 02:15:26.201855 | orchestrator | 2025-05-14 02:15:26.202261 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-14 02:15:26.202837 | orchestrator | Wednesday 14 May 2025 02:15:26 +0000 (0:00:01.224) 0:06:25.538 ********* 2025-05-14 02:15:26.326571 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:26.388452 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:26.458207 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:26.523410 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:26.589375 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:26.729349 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:26.729952 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:26.730541 | orchestrator | 2025-05-14 02:15:26.731395 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-14 02:15:26.732485 | orchestrator | Wednesday 14 May 2025 02:15:26 +0000 (0:00:00.536) 0:06:26.074 ********* 2025-05-14 02:15:30.571439 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:30.572882 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:30.574529 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:30.576039 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:30.577070 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:30.577844 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:30.579352 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:30.583987 | orchestrator | 2025-05-14 02:15:30.584016 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-14 02:15:30.584030 | orchestrator | Wednesday 14 May 2025 02:15:30 +0000 (0:00:03.834) 0:06:29.908 ********* 2025-05-14 02:15:30.747801 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:30.826076 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:30.895265 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:31.160686 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:31.230934 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:31.337291 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:31.340034 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:31.340933 | orchestrator | 2025-05-14 02:15:31.342392 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-14 02:15:31.343330 | orchestrator | Wednesday 14 May 2025 02:15:31 +0000 (0:00:00.767) 0:06:30.676 ********* 2025-05-14 02:15:31.415312 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-14 02:15:31.415713 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-14 02:15:31.490224 | orchestrator | skipping: [testbed-manager] 2025-05-14 
02:15:31.490292 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-14 02:15:31.493816 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-14 02:15:31.556636 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:31.557355 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-14 02:15:31.558272 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-14 02:15:31.639935 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:31.641005 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-14 02:15:31.644057 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-14 02:15:31.713009 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:31.713530 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-14 02:15:31.715020 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-14 02:15:31.791201 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:31.791991 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-14 02:15:31.792902 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-14 02:15:31.929459 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:31.930112 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-14 02:15:31.931795 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-14 02:15:31.933121 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:31.935021 | orchestrator | 2025-05-14 02:15:31.935049 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-14 02:15:31.935643 | orchestrator | Wednesday 14 May 2025 02:15:31 +0000 (0:00:00.598) 0:06:31.274 ********* 2025-05-14 02:15:32.069179 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:32.152250 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:32.221919 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:32.291462 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:32.364674 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:32.463338 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:32.464487 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:32.466115 | orchestrator | 2025-05-14 02:15:32.468956 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-14 02:15:32.468982 | orchestrator | Wednesday 14 May 2025 02:15:32 +0000 (0:00:00.532) 0:06:31.806 ********* 2025-05-14 02:15:32.620869 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:32.696473 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:32.766556 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:32.848469 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:32.921711 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:33.038708 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:33.039268 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:33.041329 | orchestrator | 2025-05-14 02:15:33.042538 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-14 02:15:33.043510 | orchestrator | Wednesday 14 May 2025 02:15:33 +0000 (0:00:00.574) 0:06:32.381 ********* 2025-05-14 02:15:33.203345 | orchestrator | skipping: [testbed-manager] 2025-05-14 
02:15:33.284264 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:15:33.372290 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:15:33.447412 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:15:33.699082 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:15:33.699245 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:15:33.700625 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:15:33.701551 | orchestrator | 2025-05-14 02:15:33.702376 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-14 02:15:33.703445 | orchestrator | Wednesday 14 May 2025 02:15:33 +0000 (0:00:00.660) 0:06:33.041 ********* 2025-05-14 02:15:39.921905 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:39.922556 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:39.925424 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:39.925495 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:39.927260 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:39.928375 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:39.929126 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:39.930011 | orchestrator | 2025-05-14 02:15:39.930954 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-14 02:15:39.931086 | orchestrator | Wednesday 14 May 2025 02:15:39 +0000 (0:00:06.222) 0:06:39.264 ********* 2025-05-14 02:15:40.809111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:15:40.809224 | orchestrator | 2025-05-14 02:15:40.809420 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-14 02:15:40.811110 | orchestrator | Wednesday 14 May 2025 02:15:40 +0000 (0:00:00.889) 0:06:40.153 ********* 2025-05-14 02:15:41.651029 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:41.651270 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:41.652258 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:41.653079 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:41.653945 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:41.654937 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:41.655889 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:41.656132 | orchestrator | 2025-05-14 02:15:41.657091 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-14 02:15:41.658189 | orchestrator | Wednesday 14 May 2025 02:15:41 +0000 (0:00:00.840) 0:06:40.993 ********* 2025-05-14 02:15:42.115302 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:42.553414 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:42.553517 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:42.553772 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:42.554197 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:42.556310 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:42.557364 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:42.558588 | orchestrator | 2025-05-14 02:15:42.559366 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-14 02:15:42.559931 | orchestrator | Wednesday 14 May 2025 02:15:42 +0000 (0:00:00.904) 
0:06:41.898 ********* 2025-05-14 02:15:44.122940 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:44.123057 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:44.123952 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:44.124983 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:44.124997 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:44.126275 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:44.127272 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:44.127753 | orchestrator | 2025-05-14 02:15:44.128462 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-14 02:15:44.128979 | orchestrator | Wednesday 14 May 2025 02:15:44 +0000 (0:00:01.569) 0:06:43.467 ********* 2025-05-14 02:15:44.248551 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:15:45.477890 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:45.478288 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:45.479419 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:45.480676 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:45.481549 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:45.482473 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:45.482900 | orchestrator | 2025-05-14 02:15:45.483631 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-14 02:15:45.484272 | orchestrator | Wednesday 14 May 2025 02:15:45 +0000 (0:00:01.352) 0:06:44.820 ********* 2025-05-14 02:15:46.840515 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:46.841083 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:46.841968 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:46.842504 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:46.843256 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:46.844088 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:46.844814 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:46.845104 | orchestrator | 2025-05-14 02:15:46.845670 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-14 02:15:46.845962 | orchestrator | Wednesday 14 May 2025 02:15:46 +0000 (0:00:01.362) 0:06:46.183 ********* 2025-05-14 02:15:48.249837 | orchestrator | changed: [testbed-manager] 2025-05-14 02:15:48.250008 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:48.250808 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:48.251706 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:48.252378 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:48.254168 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:48.254192 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:48.254205 | orchestrator | 2025-05-14 02:15:48.255383 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-14 02:15:48.255711 | orchestrator | Wednesday 14 May 2025 02:15:48 +0000 (0:00:01.410) 0:06:47.593 ********* 2025-05-14 02:15:49.442530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:15:49.442634 | orchestrator | 2025-05-14 02:15:49.443664 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-14 
02:15:49.444501 | orchestrator | Wednesday 14 May 2025 02:15:49 +0000 (0:00:01.190) 0:06:48.784 ********* 2025-05-14 02:15:50.865102 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:50.865289 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:50.866223 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:50.867154 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:50.867519 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:50.868221 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:50.869826 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:50.869914 | orchestrator | 2025-05-14 02:15:50.869933 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-14 02:15:50.869946 | orchestrator | Wednesday 14 May 2025 02:15:50 +0000 (0:00:01.421) 0:06:50.206 ********* 2025-05-14 02:15:52.080648 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:52.080796 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:52.080972 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:52.081946 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:52.083053 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:52.083848 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:52.084297 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:52.085061 | orchestrator | 2025-05-14 02:15:52.085488 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-14 02:15:52.086316 | orchestrator | Wednesday 14 May 2025 02:15:52 +0000 (0:00:01.215) 0:06:51.422 ********* 2025-05-14 02:15:53.286459 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:53.287935 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:53.288961 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:53.290221 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:53.291140 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:53.292077 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:53.293291 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:53.293993 | orchestrator | 2025-05-14 02:15:53.295054 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-14 02:15:53.296077 | orchestrator | Wednesday 14 May 2025 02:15:53 +0000 (0:00:01.208) 0:06:52.630 ********* 2025-05-14 02:15:54.007260 | orchestrator | ok: [testbed-manager] 2025-05-14 02:15:54.730407 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:15:54.731200 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:15:54.733368 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:15:54.734933 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:54.736344 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:54.737414 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:54.738435 | orchestrator | 2025-05-14 02:15:54.739452 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-14 02:15:54.740239 | orchestrator | Wednesday 14 May 2025 02:15:54 +0000 (0:00:01.441) 0:06:54.072 ********* 2025-05-14 02:15:55.981841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:15:55.983049 | orchestrator | 2025-05-14 02:15:55.985864 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:15:55.985896 | orchestrator 
| Wednesday 14 May 2025 02:15:55 +0000 (0:00:00.951) 0:06:55.023 ********* 2025-05-14 02:15:55.987268 | orchestrator | 2025-05-14 02:15:55.988757 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:15:55.989078 | orchestrator | Wednesday 14 May 2025 02:15:55 +0000 (0:00:00.041) 0:06:55.064 ********* 2025-05-14 02:15:55.990419 | orchestrator | 2025-05-14 02:15:55.991576 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:15:55.992327 | orchestrator | Wednesday 14 May 2025 02:15:55 +0000 (0:00:00.047) 0:06:55.112 ********* 2025-05-14 02:15:55.994769 | orchestrator | 2025-05-14 02:15:55.994797 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:15:55.994809 | orchestrator | Wednesday 14 May 2025 02:15:55 +0000 (0:00:00.039) 0:06:55.152 ********* 2025-05-14 02:15:55.995120 | orchestrator | 2025-05-14 02:15:55.995985 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:15:55.996804 | orchestrator | Wednesday 14 May 2025 02:15:55 +0000 (0:00:00.039) 0:06:55.191 ********* 2025-05-14 02:15:55.997652 | orchestrator | 2025-05-14 02:15:55.998497 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:15:55.998907 | orchestrator | Wednesday 14 May 2025 02:15:55 +0000 (0:00:00.047) 0:06:55.238 ********* 2025-05-14 02:15:56.000139 | orchestrator | 2025-05-14 02:15:56.000848 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-14 02:15:56.001065 | orchestrator | Wednesday 14 May 2025 02:15:55 +0000 (0:00:00.047) 0:06:55.286 ********* 2025-05-14 02:15:56.002233 | orchestrator | 2025-05-14 02:15:56.002958 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-14 02:15:56.003585 | orchestrator | Wednesday 14 May 2025 02:15:55 +0000 (0:00:00.039) 0:06:55.326 ********* 2025-05-14 02:15:57.068586 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:15:57.069537 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:15:57.069572 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:15:57.069588 | orchestrator | 2025-05-14 02:15:57.069602 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-14 02:15:57.069631 | orchestrator | Wednesday 14 May 2025 02:15:57 +0000 (0:00:01.083) 0:06:56.409 ********* 2025-05-14 02:15:58.589108 | orchestrator | changed: [testbed-manager] 2025-05-14 02:15:58.589210 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:58.589713 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:15:58.591006 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:58.592175 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:58.593840 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:58.594813 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:58.595438 | orchestrator | 2025-05-14 02:15:58.595953 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-14 02:15:58.596688 | orchestrator | Wednesday 14 May 2025 02:15:58 +0000 (0:00:01.521) 0:06:57.931 ********* 2025-05-14 02:15:59.734428 | orchestrator | changed: [testbed-manager] 2025-05-14 02:15:59.735094 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:15:59.735993 | orchestrator | changed: [testbed-node-4] 
2025-05-14 02:15:59.736777 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:15:59.737942 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:15:59.739422 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:15:59.743385 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:15:59.744408 | orchestrator | 2025-05-14 02:15:59.745457 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-14 02:15:59.746397 | orchestrator | Wednesday 14 May 2025 02:15:59 +0000 (0:00:01.145) 0:06:59.076 ********* 2025-05-14 02:15:59.891576 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:01.832588 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:16:01.833333 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:16:01.834377 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:16:01.836235 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:16:01.836999 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:16:01.837999 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:16:01.838103 | orchestrator | 2025-05-14 02:16:01.838513 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-14 02:16:01.839463 | orchestrator | Wednesday 14 May 2025 02:16:01 +0000 (0:00:02.100) 0:07:01.177 ********* 2025-05-14 02:16:01.957383 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:01.957787 | orchestrator | 2025-05-14 02:16:01.957824 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-14 02:16:01.958341 | orchestrator | Wednesday 14 May 2025 02:16:01 +0000 (0:00:00.122) 0:07:01.300 ********* 2025-05-14 02:16:02.947213 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:02.947379 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:16:02.948194 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:16:02.949467 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:16:02.949690 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:16:02.950177 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:16:02.951243 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:16:02.951265 | orchestrator | 2025-05-14 02:16:02.951747 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-14 02:16:02.951997 | orchestrator | Wednesday 14 May 2025 02:16:02 +0000 (0:00:00.989) 0:07:02.289 ********* 2025-05-14 02:16:03.106363 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:03.174246 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:03.252113 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:16:03.321259 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:16:03.383979 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:16:03.710455 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:16:03.710849 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:16:03.711952 | orchestrator | 2025-05-14 02:16:03.714092 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-05-14 02:16:03.714807 | orchestrator | Wednesday 14 May 2025 02:16:03 +0000 (0:00:00.765) 0:07:03.055 ********* 2025-05-14 02:16:04.671492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 
02:16:04.672360 | orchestrator | 2025-05-14 02:16:04.673067 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-14 02:16:04.673854 | orchestrator | Wednesday 14 May 2025 02:16:04 +0000 (0:00:00.958) 0:07:04.013 ********* 2025-05-14 02:16:05.129221 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:05.618283 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:05.620134 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:05.620203 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:05.621935 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:05.623515 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:05.624206 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:05.625489 | orchestrator | 2025-05-14 02:16:05.626784 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-14 02:16:05.627415 | orchestrator | Wednesday 14 May 2025 02:16:05 +0000 (0:00:00.947) 0:07:04.961 ********* 2025-05-14 02:16:08.587502 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-14 02:16:08.587605 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-05-14 02:16:08.587617 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-05-14 02:16:08.588227 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-05-14 02:16:08.588263 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-05-14 02:16:08.588800 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-05-14 02:16:08.589809 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-05-14 02:16:08.589841 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-14 02:16:08.591656 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-05-14 02:16:08.592638 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-05-14 02:16:08.593607 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-05-14 02:16:08.593700 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-05-14 02:16:08.595015 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-05-14 02:16:08.595189 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-05-14 02:16:08.595657 | orchestrator | 2025-05-14 02:16:08.597137 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-14 02:16:08.598079 | orchestrator | Wednesday 14 May 2025 02:16:08 +0000 (0:00:02.968) 0:07:07.930 ********* 2025-05-14 02:16:08.726755 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:08.809096 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:08.874296 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:16:08.945434 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:16:09.020628 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:16:09.122257 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:16:09.127152 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:16:09.127264 | orchestrator | 2025-05-14 02:16:09.127311 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-05-14 02:16:09.127334 | orchestrator | Wednesday 14 May 2025 02:16:09 +0000 (0:00:00.534) 0:07:08.465 ********* 2025-05-14 02:16:09.938517 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:16:09.939050 | orchestrator | 2025-05-14 02:16:09.942005 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-05-14 02:16:09.942837 | orchestrator | Wednesday 14 May 2025 02:16:09 +0000 (0:00:00.815) 0:07:09.280 ********* 2025-05-14 02:16:10.864267 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:10.864365 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:10.864381 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:10.864393 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:10.864405 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:10.865767 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:10.865790 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:10.865803 | orchestrator | 2025-05-14 02:16:10.865815 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-14 02:16:10.866555 | orchestrator | Wednesday 14 May 2025 02:16:10 +0000 (0:00:00.921) 0:07:10.202 ********* 2025-05-14 02:16:11.424170 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:11.500422 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:12.071205 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:12.072543 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:12.073159 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:12.073551 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:12.074115 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:12.074605 | orchestrator | 2025-05-14 02:16:12.075149 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-14 02:16:12.075639 | orchestrator | Wednesday 14 May 2025 02:16:12 +0000 (0:00:01.211) 0:07:11.413 ********* 2025-05-14 02:16:12.217704 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:12.295490 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:12.383637 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:16:12.463592 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:16:12.531164 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:16:12.649794 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:16:12.650968 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:16:12.652385 | orchestrator | 2025-05-14 02:16:12.653314 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-14 02:16:12.654116 | orchestrator | Wednesday 14 May 2025 02:16:12 +0000 (0:00:00.582) 0:07:11.995 ********* 2025-05-14 02:16:14.063236 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:14.064315 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:14.065222 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:14.066882 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:14.067362 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:14.068829 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:14.068875 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:14.069160 | orchestrator | 2025-05-14 02:16:14.069779 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-14 02:16:14.070250 | orchestrator | Wednesday 14 May 2025 02:16:14 +0000 (0:00:01.408) 0:07:13.404 ********* 2025-05-14 
02:16:14.225766 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:14.298791 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:14.375522 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:16:14.456953 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:16:14.530884 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:16:14.645530 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:16:14.645761 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:16:14.645850 | orchestrator | 2025-05-14 02:16:14.647511 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-14 02:16:14.647759 | orchestrator | Wednesday 14 May 2025 02:16:14 +0000 (0:00:00.584) 0:07:13.988 ********* 2025-05-14 02:16:16.479811 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:16.480038 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:16.481101 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:16.481298 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:16.485013 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:16.485887 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:16.492137 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:16.492172 | orchestrator | 2025-05-14 02:16:16.492186 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-14 02:16:16.492199 | orchestrator | Wednesday 14 May 2025 02:16:16 +0000 (0:00:01.834) 0:07:15.823 ********* 2025-05-14 02:16:18.241266 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:18.241553 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:16:18.242074 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:16:18.243701 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:16:18.243999 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:16:18.248410 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:16:18.248471 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:16:18.249214 | orchestrator | 2025-05-14 02:16:18.250005 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-14 02:16:18.250514 | orchestrator | Wednesday 14 May 2025 02:16:18 +0000 (0:00:01.763) 0:07:17.586 ********* 2025-05-14 02:16:20.060124 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:20.060555 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:16:20.062340 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:16:20.064549 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:16:20.068946 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:16:20.069593 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:16:20.070900 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:16:20.071201 | orchestrator | 2025-05-14 02:16:20.072239 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-14 02:16:20.073203 | orchestrator | Wednesday 14 May 2025 02:16:20 +0000 (0:00:01.815) 0:07:19.401 ********* 2025-05-14 02:16:21.736428 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:21.736957 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:16:21.737590 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:16:21.739923 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:16:21.739947 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:16:21.741183 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:16:21.742416 | orchestrator | changed: [testbed-node-2] 
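The osism.commons.docker_compose tasks above remove any legacy standalone docker-compose binary or package, install the Docker Compose v2 plugin instead, and wire a systemd target and unit around it. Purely as an illustrative sketch under those assumptions (not the actual role code; the contents of the osism.target and docker-compose unit files are not visible in the log), the plugin install and target activation could be expressed like this:

# Illustrative sketch, not the osism.commons.docker_compose role source.
# The osism.target unit name is taken from the task names above; its contents are unknown here.
- name: Install docker-compose-plugin package
  ansible.builtin.apt:
    name: docker-compose-plugin
    state: present

- name: Enable osism.target
  ansible.builtin.systemd:
    name: osism.target
    enabled: true
    daemon_reload: true

With the plugin in place, "docker compose" (the v2 subcommand) takes over from the old standalone "docker-compose" binary, which is consistent with the removal tasks logged earlier in this play.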
2025-05-14 02:16:21.742924 | orchestrator | 2025-05-14 02:16:21.743838 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-14 02:16:21.744639 | orchestrator | Wednesday 14 May 2025 02:16:21 +0000 (0:00:01.679) 0:07:21.080 ********* 2025-05-14 02:16:22.344394 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:22.761194 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:22.762831 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:22.765945 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:22.765971 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:22.765983 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:22.766185 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:22.766540 | orchestrator | 2025-05-14 02:16:22.767060 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-14 02:16:22.767434 | orchestrator | Wednesday 14 May 2025 02:16:22 +0000 (0:00:01.025) 0:07:22.106 ********* 2025-05-14 02:16:22.885198 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:22.959691 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:23.033020 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:16:23.116162 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:16:23.172504 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:16:23.563624 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:16:23.564438 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:16:23.566355 | orchestrator | 2025-05-14 02:16:23.570078 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-14 02:16:23.570137 | orchestrator | Wednesday 14 May 2025 02:16:23 +0000 (0:00:00.800) 0:07:22.906 ********* 2025-05-14 02:16:23.707693 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:23.786387 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:23.859419 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:16:23.926372 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:16:24.004119 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:16:24.122193 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:16:24.122297 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:16:24.122371 | orchestrator | 2025-05-14 02:16:24.122834 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-14 02:16:24.122972 | orchestrator | Wednesday 14 May 2025 02:16:24 +0000 (0:00:00.559) 0:07:23.465 ********* 2025-05-14 02:16:24.299832 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:24.371036 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:24.441791 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:24.540158 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:24.620607 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:24.757553 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:24.757907 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:24.759155 | orchestrator | 2025-05-14 02:16:24.759340 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-14 02:16:24.764180 | orchestrator | Wednesday 14 May 2025 02:16:24 +0000 (0:00:00.635) 0:07:24.101 ********* 2025-05-14 02:16:24.912449 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:24.988821 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:25.345958 | orchestrator | ok: [testbed-node-4] 2025-05-14 
02:16:25.418505 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:25.492213 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:25.635833 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:25.637241 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:25.638154 | orchestrator | 2025-05-14 02:16:25.639262 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-14 02:16:25.640222 | orchestrator | Wednesday 14 May 2025 02:16:25 +0000 (0:00:00.876) 0:07:24.977 ********* 2025-05-14 02:16:25.778407 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:25.861920 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:25.938471 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:26.015356 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:26.102851 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:26.238444 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:26.238544 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:26.241238 | orchestrator | 2025-05-14 02:16:26.242150 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-14 02:16:26.243922 | orchestrator | Wednesday 14 May 2025 02:16:26 +0000 (0:00:00.601) 0:07:25.579 ********* 2025-05-14 02:16:31.996419 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:31.996612 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:31.997890 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:31.999156 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:32.000375 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:32.001007 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:32.004323 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:32.004850 | orchestrator | 2025-05-14 02:16:32.005636 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-14 02:16:32.006230 | orchestrator | Wednesday 14 May 2025 02:16:31 +0000 (0:00:05.759) 0:07:31.338 ********* 2025-05-14 02:16:32.151004 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:32.223835 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:32.307053 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:16:32.386241 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:16:32.447629 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:16:32.578150 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:16:32.578501 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:16:32.579844 | orchestrator | 2025-05-14 02:16:32.581886 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-14 02:16:32.582314 | orchestrator | Wednesday 14 May 2025 02:16:32 +0000 (0:00:00.583) 0:07:31.922 ********* 2025-05-14 02:16:33.829269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:16:33.829488 | orchestrator | 2025-05-14 02:16:33.830817 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-14 02:16:33.832095 | orchestrator | Wednesday 14 May 2025 02:16:33 +0000 (0:00:01.248) 0:07:33.171 ********* 2025-05-14 02:16:35.550544 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:35.551468 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:35.553090 | orchestrator | ok: 
[testbed-node-5] 2025-05-14 02:16:35.554131 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:35.555372 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:35.555396 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:35.555783 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:35.556689 | orchestrator | 2025-05-14 02:16:35.557184 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-14 02:16:35.558111 | orchestrator | Wednesday 14 May 2025 02:16:35 +0000 (0:00:01.722) 0:07:34.893 ********* 2025-05-14 02:16:36.796420 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:36.800522 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:36.802207 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:36.802474 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:36.802940 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:36.804819 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:36.805132 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:36.805586 | orchestrator | 2025-05-14 02:16:36.806077 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-14 02:16:36.806443 | orchestrator | Wednesday 14 May 2025 02:16:36 +0000 (0:00:01.236) 0:07:36.130 ********* 2025-05-14 02:16:37.286849 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:37.695461 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:37.696581 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:37.697679 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:37.698370 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:37.699341 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:37.700200 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:37.702171 | orchestrator | 2025-05-14 02:16:37.702778 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-14 02:16:37.703842 | orchestrator | Wednesday 14 May 2025 02:16:37 +0000 (0:00:00.907) 0:07:37.037 ********* 2025-05-14 02:16:38.641347 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:16:39.931207 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:16:39.931642 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:16:39.935026 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:16:39.935064 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:16:39.935108 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:16:39.935205 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-14 02:16:39.936250 | orchestrator | 2025-05-14 02:16:39.936999 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-14 02:16:39.938924 | orchestrator | 
Wednesday 14 May 2025 02:16:39 +0000 (0:00:02.236) 0:07:39.273 ********* 2025-05-14 02:16:40.789458 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:16:40.790253 | orchestrator | 2025-05-14 02:16:40.790677 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-14 02:16:40.791229 | orchestrator | Wednesday 14 May 2025 02:16:40 +0000 (0:00:00.857) 0:07:40.131 ********* 2025-05-14 02:16:48.958767 | orchestrator | changed: [testbed-manager] 2025-05-14 02:16:48.959136 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:16:48.960287 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:16:48.963072 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:16:48.964604 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:16:48.965646 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:16:48.966990 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:16:48.968001 | orchestrator | 2025-05-14 02:16:48.968421 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-14 02:16:48.969323 | orchestrator | Wednesday 14 May 2025 02:16:48 +0000 (0:00:08.169) 0:07:48.300 ********* 2025-05-14 02:16:50.839282 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:50.839411 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:50.839438 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:50.839545 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:50.839956 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:50.840501 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:50.840992 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:50.841684 | orchestrator | 2025-05-14 02:16:50.842121 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-14 02:16:50.842739 | orchestrator | Wednesday 14 May 2025 02:16:50 +0000 (0:00:01.877) 0:07:50.178 ********* 2025-05-14 02:16:52.114997 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:52.115188 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:52.116046 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:52.116445 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:52.120644 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:52.121497 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:52.122217 | orchestrator | 2025-05-14 02:16:52.123119 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-14 02:16:52.123924 | orchestrator | Wednesday 14 May 2025 02:16:52 +0000 (0:00:01.279) 0:07:51.457 ********* 2025-05-14 02:16:53.553768 | orchestrator | changed: [testbed-manager] 2025-05-14 02:16:53.554370 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:16:53.555019 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:16:53.555542 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:16:53.556851 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:16:53.557741 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:16:53.558677 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:16:53.559929 | orchestrator | 2025-05-14 02:16:53.560908 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-14 02:16:53.561935 | orchestrator | 2025-05-14 
02:16:53.562807 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-14 02:16:53.563918 | orchestrator | Wednesday 14 May 2025 02:16:53 +0000 (0:00:01.439) 0:07:52.897 ********* 2025-05-14 02:16:53.677073 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:53.749319 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:53.815573 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:16:53.872886 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:16:53.941969 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:16:54.056943 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:16:54.057080 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:16:54.057189 | orchestrator | 2025-05-14 02:16:54.057472 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-14 02:16:54.057972 | orchestrator | 2025-05-14 02:16:54.058288 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-14 02:16:54.059836 | orchestrator | Wednesday 14 May 2025 02:16:54 +0000 (0:00:00.504) 0:07:53.401 ********* 2025-05-14 02:16:55.428176 | orchestrator | changed: [testbed-manager] 2025-05-14 02:16:55.428275 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:16:55.428385 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:16:55.428910 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:16:55.429956 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:16:55.430567 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:16:55.431772 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:16:55.432416 | orchestrator | 2025-05-14 02:16:55.433546 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-14 02:16:55.433907 | orchestrator | Wednesday 14 May 2025 02:16:55 +0000 (0:00:01.368) 0:07:54.770 ********* 2025-05-14 02:16:56.865446 | orchestrator | ok: [testbed-manager] 2025-05-14 02:16:56.865879 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:16:56.865975 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:16:56.867922 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:16:56.868202 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:16:56.869925 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:16:56.871363 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:16:56.871770 | orchestrator | 2025-05-14 02:16:56.872779 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-14 02:16:56.873588 | orchestrator | Wednesday 14 May 2025 02:16:56 +0000 (0:00:01.440) 0:07:56.211 ********* 2025-05-14 02:16:57.002467 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:16:57.065198 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:16:57.130138 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:16:57.365668 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:16:57.428559 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:16:57.832933 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:16:57.833275 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:16:57.834247 | orchestrator | 2025-05-14 02:16:57.834458 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-14 02:16:57.835371 | orchestrator | Wednesday 14 May 2025 02:16:57 +0000 (0:00:00.966) 0:07:57.177 ********* 2025-05-14 02:16:59.066134 | orchestrator | changed: 
[testbed-manager] 2025-05-14 02:16:59.066312 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:16:59.067077 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:16:59.067607 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:16:59.068916 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:16:59.069689 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:16:59.070381 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:16:59.071245 | orchestrator | 2025-05-14 02:16:59.071932 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-14 02:16:59.072584 | orchestrator | 2025-05-14 02:16:59.073633 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-14 02:16:59.074851 | orchestrator | Wednesday 14 May 2025 02:16:59 +0000 (0:00:01.235) 0:07:58.413 ********* 2025-05-14 02:16:59.949147 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:16:59.949797 | orchestrator | 2025-05-14 02:16:59.953148 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-14 02:16:59.953274 | orchestrator | Wednesday 14 May 2025 02:16:59 +0000 (0:00:00.879) 0:07:59.292 ********* 2025-05-14 02:17:00.438949 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:00.512219 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:00.584492 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:01.029068 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:01.029885 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:01.032154 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:01.033042 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:01.034176 | orchestrator | 2025-05-14 02:17:01.035026 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-14 02:17:01.036117 | orchestrator | Wednesday 14 May 2025 02:17:01 +0000 (0:00:01.080) 0:08:00.373 ********* 2025-05-14 02:17:02.166350 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:02.167038 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:17:02.170433 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:17:02.170931 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:17:02.171332 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:17:02.171841 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:17:02.172276 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:17:02.172639 | orchestrator | 2025-05-14 02:17:02.173168 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-14 02:17:02.173617 | orchestrator | Wednesday 14 May 2025 02:17:02 +0000 (0:00:01.139) 0:08:01.512 ********* 2025-05-14 02:17:02.924661 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:17:02.924997 | orchestrator | 2025-05-14 02:17:02.925026 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-14 02:17:02.925796 | orchestrator | Wednesday 14 May 2025 02:17:02 +0000 (0:00:00.758) 0:08:02.270 ********* 2025-05-14 02:17:03.301021 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:03.854277 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:03.854490 | orchestrator | ok: 
[testbed-node-4] 2025-05-14 02:17:03.855363 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:03.856181 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:03.856612 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:03.857141 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:03.857828 | orchestrator | 2025-05-14 02:17:03.858193 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-14 02:17:03.858939 | orchestrator | Wednesday 14 May 2025 02:17:03 +0000 (0:00:00.929) 0:08:03.199 ********* 2025-05-14 02:17:04.897281 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:04.897424 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:17:04.897602 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:17:04.897703 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:17:04.898620 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:17:04.899186 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:17:04.899422 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:17:04.900595 | orchestrator | 2025-05-14 02:17:04.900777 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:17:04.901414 | orchestrator | 2025-05-14 02:17:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:17:04.901685 | orchestrator | 2025-05-14 02:17:04 | INFO  | Please wait and do not abort execution. 2025-05-14 02:17:04.903228 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-05-14 02:17:04.904246 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 02:17:04.906106 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 02:17:04.907227 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 02:17:04.908346 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-14 02:17:04.910094 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 02:17:04.910582 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-14 02:17:04.911794 | orchestrator | 2025-05-14 02:17:04.912916 | orchestrator | Wednesday 14 May 2025 02:17:04 +0000 (0:00:01.040) 0:08:04.240 ********* 2025-05-14 02:17:04.913630 | orchestrator | =============================================================================== 2025-05-14 02:17:04.914627 | orchestrator | osism.commons.packages : Install required packages --------------------- 82.27s 2025-05-14 02:17:04.915381 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.08s 2025-05-14 02:17:04.916107 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.90s 2025-05-14 02:17:04.916648 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.18s 2025-05-14 02:17:04.917501 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.41s 2025-05-14 02:17:04.918447 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.69s 2025-05-14 02:17:04.919108 | orchestrator | osism.services.docker : 
Install docker-cli package --------------------- 12.64s 2025-05-14 02:17:04.919989 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.32s 2025-05-14 02:17:04.920495 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.28s 2025-05-14 02:17:04.921247 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.61s 2025-05-14 02:17:04.921832 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.17s 2025-05-14 02:17:04.922785 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.86s 2025-05-14 02:17:04.923518 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.33s 2025-05-14 02:17:04.924031 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.26s 2025-05-14 02:17:04.924426 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.23s 2025-05-14 02:17:04.925062 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.22s 2025-05-14 02:17:04.925564 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.96s 2025-05-14 02:17:04.926215 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.93s 2025-05-14 02:17:04.926490 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.85s 2025-05-14 02:17:04.926998 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.79s 2025-05-14 02:17:05.632804 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-14 02:17:05.632930 | orchestrator | + osism apply network 2025-05-14 02:17:07.828232 | orchestrator | 2025-05-14 02:17:07 | INFO  | Task 11393f93-8ccb-4237-8850-d56e82300801 (network) was prepared for execution. 2025-05-14 02:17:07.828330 | orchestrator | 2025-05-14 02:17:07 | INFO  | It takes a moment until task 11393f93-8ccb-4237-8850-d56e82300801 (network) has been started and output is visible here. 
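The network task queued above renders a netplan configuration for every host from the variables in /opt/configuration and writes it as /etc/netplan/01-osism.yaml (that file name appears in the cleanup task further down in this play). The following is only an illustrative sketch of what such a generated file could look like; the interface name and address are hypothetical and not taken from this run:

    # /etc/netplan/01-osism.yaml -- hypothetical example
    network:
      version: 2
      ethernets:
        eth0:                       # illustrative interface name
          dhcp4: false
          addresses:
            - 192.168.16.10/20      # illustrative management address

Files in /etc/netplan that the role did not generate (for example the cloud-init file 50-cloud-init.yaml) are removed afterwards by the "Remove unused configuration files" task seen below.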
2025-05-14 02:17:11.348246 | orchestrator | 2025-05-14 02:17:11.348860 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-14 02:17:11.349089 | orchestrator | 2025-05-14 02:17:11.349877 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-14 02:17:11.352681 | orchestrator | Wednesday 14 May 2025 02:17:11 +0000 (0:00:00.242) 0:00:00.242 ********* 2025-05-14 02:17:11.500109 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:11.578212 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:11.655528 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:11.733377 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:11.821006 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:12.058926 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:12.059351 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:12.059671 | orchestrator | 2025-05-14 02:17:12.060421 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-14 02:17:12.060670 | orchestrator | Wednesday 14 May 2025 02:17:12 +0000 (0:00:00.711) 0:00:00.954 ********* 2025-05-14 02:17:13.285941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:17:13.286540 | orchestrator | 2025-05-14 02:17:13.287741 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-14 02:17:13.288709 | orchestrator | Wednesday 14 May 2025 02:17:13 +0000 (0:00:01.223) 0:00:02.178 ********* 2025-05-14 02:17:15.149980 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:15.150280 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:15.150954 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:15.151686 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:15.152372 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:15.152944 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:15.154555 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:15.155045 | orchestrator | 2025-05-14 02:17:15.155648 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-14 02:17:15.156130 | orchestrator | Wednesday 14 May 2025 02:17:15 +0000 (0:00:01.864) 0:00:04.042 ********* 2025-05-14 02:17:16.819494 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:16.823683 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:16.823781 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:16.823795 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:16.824402 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:16.825283 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:16.826633 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:16.827144 | orchestrator | 2025-05-14 02:17:16.827858 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-14 02:17:16.828037 | orchestrator | Wednesday 14 May 2025 02:17:16 +0000 (0:00:01.669) 0:00:05.711 ********* 2025-05-14 02:17:17.311493 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-14 02:17:17.311695 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-14 02:17:17.923449 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-14 02:17:17.924592 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-14 02:17:17.928795 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-14 02:17:17.928831 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-14 02:17:17.928843 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-14 02:17:17.928855 | orchestrator | 2025-05-14 02:17:17.928869 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-14 02:17:17.929393 | orchestrator | Wednesday 14 May 2025 02:17:17 +0000 (0:00:01.105) 0:00:06.817 ********* 2025-05-14 02:17:19.651864 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-14 02:17:19.652141 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 02:17:19.653190 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:17:19.655792 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-14 02:17:19.658922 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:17:19.660294 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-14 02:17:19.662434 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 02:17:19.663373 | orchestrator | 2025-05-14 02:17:19.664173 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-14 02:17:19.665287 | orchestrator | Wednesday 14 May 2025 02:17:19 +0000 (0:00:01.730) 0:00:08.547 ********* 2025-05-14 02:17:21.316107 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:21.316825 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:17:21.317528 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:17:21.318915 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:17:21.319630 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:17:21.320834 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:17:21.322013 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:17:21.322472 | orchestrator | 2025-05-14 02:17:21.323573 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-14 02:17:21.324263 | orchestrator | Wednesday 14 May 2025 02:17:21 +0000 (0:00:01.657) 0:00:10.205 ********* 2025-05-14 02:17:21.873403 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:17:22.319686 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:17:22.320238 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-14 02:17:22.321269 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-14 02:17:22.322681 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 02:17:22.323864 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-14 02:17:22.324317 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 02:17:22.325457 | orchestrator | 2025-05-14 02:17:22.325849 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-14 02:17:22.326350 | orchestrator | Wednesday 14 May 2025 02:17:22 +0000 (0:00:01.010) 0:00:11.216 ********* 2025-05-14 02:17:22.755946 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:22.842320 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:23.431202 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:23.432325 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:23.435404 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:23.435438 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:23.435450 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:23.436029 | orchestrator | 2025-05-14 
02:17:23.436593 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-14 02:17:23.437577 | orchestrator | Wednesday 14 May 2025 02:17:23 +0000 (0:00:01.107) 0:00:12.324 ********* 2025-05-14 02:17:23.590411 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:17:23.675776 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:17:23.755911 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:17:23.843307 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:17:23.927329 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:17:24.259616 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:17:24.260525 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:17:24.261990 | orchestrator | 2025-05-14 02:17:24.263047 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-14 02:17:24.263904 | orchestrator | Wednesday 14 May 2025 02:17:24 +0000 (0:00:00.827) 0:00:13.151 ********* 2025-05-14 02:17:26.170209 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:26.170265 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:26.170959 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:26.171291 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:26.171671 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:26.172176 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:26.173653 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:26.173890 | orchestrator | 2025-05-14 02:17:26.174164 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-14 02:17:26.174460 | orchestrator | Wednesday 14 May 2025 02:17:26 +0000 (0:00:01.914) 0:00:15.066 ********* 2025-05-14 02:17:27.958190 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-14 02:17:27.958671 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:17:27.958812 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:17:27.960216 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:17:27.962090 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:17:27.963059 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:17:27.963887 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:17:27.964666 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-14 02:17:27.965847 | orchestrator | 2025-05-14 02:17:27.966417 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-14 02:17:27.967132 | orchestrator | Wednesday 14 May 2025 02:17:27 +0000 (0:00:01.783) 0:00:16.849 ********* 2025-05-14 02:17:29.384141 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:29.385324 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:17:29.386306 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:17:29.390577 | 
orchestrator | changed: [testbed-node-2] 2025-05-14 02:17:29.390696 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:17:29.390767 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:17:29.390780 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:17:29.390790 | orchestrator | 2025-05-14 02:17:29.390860 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-14 02:17:29.391611 | orchestrator | Wednesday 14 May 2025 02:17:29 +0000 (0:00:01.430) 0:00:18.280 ********* 2025-05-14 02:17:30.808788 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:17:30.809118 | orchestrator | 2025-05-14 02:17:30.809973 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-14 02:17:30.811061 | orchestrator | Wednesday 14 May 2025 02:17:30 +0000 (0:00:01.421) 0:00:19.701 ********* 2025-05-14 02:17:31.377262 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:31.797026 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:31.798488 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:31.800461 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:31.802167 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:31.803469 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:31.804827 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:31.805328 | orchestrator | 2025-05-14 02:17:31.806127 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-14 02:17:31.806581 | orchestrator | Wednesday 14 May 2025 02:17:31 +0000 (0:00:00.989) 0:00:20.690 ********* 2025-05-14 02:17:31.954559 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:32.036783 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:17:32.278980 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:17:32.365663 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:17:32.451135 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:17:32.610805 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:17:32.614573 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:17:32.614613 | orchestrator | 2025-05-14 02:17:32.614626 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-14 02:17:32.614639 | orchestrator | Wednesday 14 May 2025 02:17:32 +0000 (0:00:00.811) 0:00:21.501 ********* 2025-05-14 02:17:33.050205 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:17:33.050296 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:17:33.164370 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:17:33.164862 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:17:33.165275 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:17:33.167284 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:17:33.648815 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:17:33.649200 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:17:33.650068 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:17:33.653419 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:17:33.653458 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:17:33.653479 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:17:33.653497 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-14 02:17:33.653515 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-14 02:17:33.653535 | orchestrator | 2025-05-14 02:17:33.654189 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-14 02:17:33.654433 | orchestrator | Wednesday 14 May 2025 02:17:33 +0000 (0:00:01.043) 0:00:22.545 ********* 2025-05-14 02:17:33.978185 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:17:34.061524 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:17:34.149852 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:17:34.237085 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:17:34.333286 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:17:35.563445 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:17:35.567551 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:17:35.567612 | orchestrator | 2025-05-14 02:17:35.567685 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-14 02:17:35.569892 | orchestrator | Wednesday 14 May 2025 02:17:35 +0000 (0:00:01.910) 0:00:24.455 ********* 2025-05-14 02:17:35.724106 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:17:35.804580 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:17:36.084486 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:17:36.180208 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:17:36.275869 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:17:36.323020 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:17:36.323109 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:17:36.323123 | orchestrator | 2025-05-14 02:17:36.323135 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:17:36.323182 | orchestrator | 2025-05-14 02:17:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:17:36.323197 | orchestrator | 2025-05-14 02:17:36 | INFO  | Please wait and do not abort execution. 
2025-05-14 02:17:36.323297 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:17:36.324008 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:17:36.324497 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:17:36.325474 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:17:36.326451 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:17:36.327184 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:17:36.327868 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:17:36.328459 | orchestrator | 2025-05-14 02:17:36.329310 | orchestrator | Wednesday 14 May 2025 02:17:36 +0000 (0:00:00.761) 0:00:25.217 ********* 2025-05-14 02:17:36.330171 | orchestrator | =============================================================================== 2025-05-14 02:17:36.330490 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.91s 2025-05-14 02:17:36.331089 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.91s 2025-05-14 02:17:36.331677 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.86s 2025-05-14 02:17:36.332128 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.78s 2025-05-14 02:17:36.332537 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.73s 2025-05-14 02:17:36.333404 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.67s 2025-05-14 02:17:36.333819 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.66s 2025-05-14 02:17:36.334171 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.43s 2025-05-14 02:17:36.334536 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.42s 2025-05-14 02:17:36.335289 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s 2025-05-14 02:17:36.335813 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.11s 2025-05-14 02:17:36.336029 | orchestrator | osism.commons.network : Create required directories --------------------- 1.11s 2025-05-14 02:17:36.336386 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.04s 2025-05-14 02:17:36.336800 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.01s 2025-05-14 02:17:36.337203 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s 2025-05-14 02:17:36.337510 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.83s 2025-05-14 02:17:36.337986 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.81s 2025-05-14 02:17:36.339190 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.76s 2025-05-14 02:17:36.339215 | orchestrator | osism.commons.network : Gather variables for each operating 
system ------ 0.71s 2025-05-14 02:17:36.915930 | orchestrator | + osism apply wireguard 2025-05-14 02:17:38.402826 | orchestrator | 2025-05-14 02:17:38 | INFO  | Task 39a6d2ca-031a-46ce-a4eb-f9a9f83b5237 (wireguard) was prepared for execution. 2025-05-14 02:17:38.402937 | orchestrator | 2025-05-14 02:17:38 | INFO  | It takes a moment until task 39a6d2ca-031a-46ce-a4eb-f9a9f83b5237 (wireguard) has been started and output is visible here. 2025-05-14 02:17:41.433833 | orchestrator | 2025-05-14 02:17:41.435148 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-14 02:17:41.436215 | orchestrator | 2025-05-14 02:17:41.438674 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-14 02:17:41.438745 | orchestrator | Wednesday 14 May 2025 02:17:41 +0000 (0:00:00.157) 0:00:00.157 ********* 2025-05-14 02:17:42.720286 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:42.720442 | orchestrator | 2025-05-14 02:17:42.720531 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-14 02:17:42.721164 | orchestrator | Wednesday 14 May 2025 02:17:42 +0000 (0:00:01.287) 0:00:01.445 ********* 2025-05-14 02:17:49.169338 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:49.171871 | orchestrator | 2025-05-14 02:17:49.171914 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-14 02:17:49.174510 | orchestrator | Wednesday 14 May 2025 02:17:49 +0000 (0:00:06.447) 0:00:07.893 ********* 2025-05-14 02:17:49.713709 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:49.713872 | orchestrator | 2025-05-14 02:17:49.714184 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-14 02:17:49.716773 | orchestrator | Wednesday 14 May 2025 02:17:49 +0000 (0:00:00.546) 0:00:08.440 ********* 2025-05-14 02:17:50.218544 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:50.219005 | orchestrator | 2025-05-14 02:17:50.219189 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-14 02:17:50.219866 | orchestrator | Wednesday 14 May 2025 02:17:50 +0000 (0:00:00.505) 0:00:08.945 ********* 2025-05-14 02:17:50.779813 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:50.780635 | orchestrator | 2025-05-14 02:17:50.781757 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-14 02:17:50.783678 | orchestrator | Wednesday 14 May 2025 02:17:50 +0000 (0:00:00.561) 0:00:09.506 ********* 2025-05-14 02:17:51.338596 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:51.338871 | orchestrator | 2025-05-14 02:17:51.340949 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-14 02:17:51.340979 | orchestrator | Wednesday 14 May 2025 02:17:51 +0000 (0:00:00.557) 0:00:10.064 ********* 2025-05-14 02:17:51.753576 | orchestrator | ok: [testbed-manager] 2025-05-14 02:17:51.754322 | orchestrator | 2025-05-14 02:17:51.755818 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-14 02:17:51.756793 | orchestrator | Wednesday 14 May 2025 02:17:51 +0000 (0:00:00.416) 0:00:10.480 ********* 2025-05-14 02:17:52.945770 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:52.946449 | orchestrator | 2025-05-14 02:17:52.947548 | orchestrator | TASK 
[osism.services.wireguard : Copy client configuration files] ************** 2025-05-14 02:17:52.952080 | orchestrator | Wednesday 14 May 2025 02:17:52 +0000 (0:00:01.190) 0:00:11.670 ********* 2025-05-14 02:17:53.919069 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-14 02:17:53.919188 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:53.919673 | orchestrator | 2025-05-14 02:17:53.920137 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-14 02:17:53.920667 | orchestrator | Wednesday 14 May 2025 02:17:53 +0000 (0:00:00.971) 0:00:12.641 ********* 2025-05-14 02:17:55.663228 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:55.664105 | orchestrator | 2025-05-14 02:17:55.664217 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-14 02:17:55.665661 | orchestrator | Wednesday 14 May 2025 02:17:55 +0000 (0:00:01.743) 0:00:14.385 ********* 2025-05-14 02:17:56.600635 | orchestrator | changed: [testbed-manager] 2025-05-14 02:17:56.600865 | orchestrator | 2025-05-14 02:17:56.600966 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:17:56.601130 | orchestrator | 2025-05-14 02:17:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:17:56.601156 | orchestrator | 2025-05-14 02:17:56 | INFO  | Please wait and do not abort execution. 2025-05-14 02:17:56.601899 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:17:56.603879 | orchestrator | 2025-05-14 02:17:56.603913 | orchestrator | Wednesday 14 May 2025 02:17:56 +0000 (0:00:00.940) 0:00:15.325 ********* 2025-05-14 02:17:56.603926 | orchestrator | =============================================================================== 2025-05-14 02:17:56.603937 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.45s 2025-05-14 02:17:56.604313 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.74s 2025-05-14 02:17:56.604653 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.29s 2025-05-14 02:17:56.604936 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.19s 2025-05-14 02:17:56.605160 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s 2025-05-14 02:17:56.605661 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s 2025-05-14 02:17:56.606470 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s 2025-05-14 02:17:56.608040 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.56s 2025-05-14 02:17:56.609200 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2025-05-14 02:17:56.610215 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.51s 2025-05-14 02:17:56.611289 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-05-14 02:17:57.186290 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-14 02:17:57.222626 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-14 02:17:57.222675 | orchestrator | 
Dload Upload Total Spent Left Speed 2025-05-14 02:17:57.296985 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 188 0 --:--:-- --:--:-- --:--:-- 189 2025-05-14 02:17:57.310302 | orchestrator | + osism apply --environment custom workarounds 2025-05-14 02:17:58.735591 | orchestrator | 2025-05-14 02:17:58 | INFO  | Trying to run play workarounds in environment custom 2025-05-14 02:17:58.783700 | orchestrator | 2025-05-14 02:17:58 | INFO  | Task df01c34f-9526-4766-8fbc-4d347d3f3ca4 (workarounds) was prepared for execution. 2025-05-14 02:17:58.783787 | orchestrator | 2025-05-14 02:17:58 | INFO  | It takes a moment until task df01c34f-9526-4766-8fbc-4d347d3f3ca4 (workarounds) has been started and output is visible here. 2025-05-14 02:18:01.787925 | orchestrator | 2025-05-14 02:18:01.792240 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:18:01.795665 | orchestrator | 2025-05-14 02:18:01.797444 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-14 02:18:01.797476 | orchestrator | Wednesday 14 May 2025 02:18:01 +0000 (0:00:00.139) 0:00:00.139 ********* 2025-05-14 02:18:01.955532 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-14 02:18:02.039273 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-14 02:18:02.133267 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-14 02:18:02.223680 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-14 02:18:02.307535 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-14 02:18:02.515649 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-14 02:18:02.516579 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-14 02:18:02.517876 | orchestrator | 2025-05-14 02:18:02.517979 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-14 02:18:02.518582 | orchestrator | 2025-05-14 02:18:02.519355 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-14 02:18:02.519884 | orchestrator | Wednesday 14 May 2025 02:18:02 +0000 (0:00:00.732) 0:00:00.871 ********* 2025-05-14 02:18:05.162508 | orchestrator | ok: [testbed-manager] 2025-05-14 02:18:05.162807 | orchestrator | 2025-05-14 02:18:05.163234 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-14 02:18:05.163547 | orchestrator | 2025-05-14 02:18:05.163919 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-14 02:18:05.164242 | orchestrator | Wednesday 14 May 2025 02:18:05 +0000 (0:00:02.641) 0:00:03.513 ********* 2025-05-14 02:18:07.116591 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:18:07.116867 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:18:07.116889 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:18:07.117042 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:18:07.118422 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:18:07.118809 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:18:07.119049 | orchestrator | 2025-05-14 02:18:07.119573 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-14 02:18:07.119860 | orchestrator | 2025-05-14 
02:18:07.120506 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-14 02:18:07.120676 | orchestrator | Wednesday 14 May 2025 02:18:07 +0000 (0:00:01.956) 0:00:05.470 ********* 2025-05-14 02:18:08.490970 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 02:18:08.491105 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 02:18:08.493519 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 02:18:08.493545 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 02:18:08.493557 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 02:18:08.493988 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-14 02:18:08.494833 | orchestrator | 2025-05-14 02:18:08.495275 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-14 02:18:08.495777 | orchestrator | Wednesday 14 May 2025 02:18:08 +0000 (0:00:01.369) 0:00:06.839 ********* 2025-05-14 02:18:12.352114 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:18:12.352582 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:18:12.354225 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:18:12.355223 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:18:12.355833 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:18:12.356319 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:18:12.356770 | orchestrator | 2025-05-14 02:18:12.357209 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-14 02:18:12.357709 | orchestrator | Wednesday 14 May 2025 02:18:12 +0000 (0:00:03.866) 0:00:10.705 ********* 2025-05-14 02:18:12.500154 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:18:12.579016 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:18:12.659217 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:18:12.908515 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:18:13.086891 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:18:13.087068 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:18:13.087978 | orchestrator | 2025-05-14 02:18:13.088327 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-14 02:18:13.088657 | orchestrator | 2025-05-14 02:18:13.089091 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-14 02:18:13.089475 | orchestrator | Wednesday 14 May 2025 02:18:13 +0000 (0:00:00.732) 0:00:11.438 ********* 2025-05-14 02:18:14.657175 | orchestrator | changed: [testbed-manager] 2025-05-14 02:18:14.657306 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:18:14.657946 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:18:14.658896 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:18:14.659700 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:18:14.659830 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:18:14.660820 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:18:14.661211 | orchestrator | 2025-05-14 02:18:14.662008 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-14 02:18:14.662319 | orchestrator | Wednesday 14 May 2025 02:18:14 +0000 (0:00:01.571) 0:00:13.010 ********* 2025-05-14 02:18:16.244707 | orchestrator | changed: [testbed-manager] 2025-05-14 02:18:16.244871 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:18:16.245622 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:18:16.247055 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:18:16.247078 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:18:16.249456 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:18:16.249883 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:18:16.250451 | orchestrator | 2025-05-14 02:18:16.251060 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-14 02:18:16.251645 | orchestrator | Wednesday 14 May 2025 02:18:16 +0000 (0:00:01.584) 0:00:14.595 ********* 2025-05-14 02:18:17.698543 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:18:17.698803 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:18:17.699925 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:18:17.700604 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:18:17.702171 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:18:17.703193 | orchestrator | ok: [testbed-manager] 2025-05-14 02:18:17.704193 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:18:17.705314 | orchestrator | 2025-05-14 02:18:17.706072 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-14 02:18:17.707046 | orchestrator | Wednesday 14 May 2025 02:18:17 +0000 (0:00:01.456) 0:00:16.051 ********* 2025-05-14 02:18:19.475135 | orchestrator | changed: [testbed-manager] 2025-05-14 02:18:19.475275 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:18:19.475390 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:18:19.475856 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:18:19.477385 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:18:19.478081 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:18:19.478337 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:18:19.478943 | orchestrator | 2025-05-14 02:18:19.479585 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-14 02:18:19.479754 | orchestrator | Wednesday 14 May 2025 02:18:19 +0000 (0:00:01.777) 0:00:17.828 ********* 2025-05-14 02:18:19.628524 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:18:19.698324 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:18:19.767517 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:18:19.836678 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:18:20.018128 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:18:20.151140 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:18:20.151739 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:18:20.152069 | orchestrator | 2025-05-14 02:18:20.152279 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-14 02:18:20.153024 | orchestrator | 2025-05-14 02:18:20.153319 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-14 02:18:20.153739 | orchestrator | Wednesday 14 May 2025 02:18:20 +0000 (0:00:00.678) 0:00:18.507 ********* 2025-05-14 02:18:22.542301 | orchestrator | ok: [testbed-manager] 2025-05-14 02:18:22.543054 
| orchestrator | ok: [testbed-node-3] 2025-05-14 02:18:22.545113 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:18:22.545526 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:18:22.546289 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:18:22.547081 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:18:22.547916 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:18:22.548304 | orchestrator | 2025-05-14 02:18:22.549224 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:18:22.549842 | orchestrator | 2025-05-14 02:18:22 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:18:22.550154 | orchestrator | 2025-05-14 02:18:22 | INFO  | Please wait and do not abort execution. 2025-05-14 02:18:22.551491 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:18:22.554194 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:18:22.555090 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:18:22.557663 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:18:22.557992 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:18:22.559054 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:18:22.562154 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:18:22.563807 | orchestrator | 2025-05-14 02:18:22.563838 | orchestrator | Wednesday 14 May 2025 02:18:22 +0000 (0:00:02.387) 0:00:20.894 ********* 2025-05-14 02:18:22.564744 | orchestrator | =============================================================================== 2025-05-14 02:18:22.564874 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.87s 2025-05-14 02:18:22.564889 | orchestrator | Apply netplan configuration --------------------------------------------- 2.64s 2025-05-14 02:18:22.564897 | orchestrator | Install python3-docker -------------------------------------------------- 2.39s 2025-05-14 02:18:22.565313 | orchestrator | Apply netplan configuration --------------------------------------------- 1.96s 2025-05-14 02:18:22.565377 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.78s 2025-05-14 02:18:22.565772 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.58s 2025-05-14 02:18:22.566156 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.57s 2025-05-14 02:18:22.567268 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.46s 2025-05-14 02:18:22.567291 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.37s 2025-05-14 02:18:22.567299 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.73s 2025-05-14 02:18:22.567773 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.73s 2025-05-14 02:18:22.567912 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.68s 2025-05-14 02:18:22.936113 | orchestrator | + osism 
apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-14 02:18:24.269882 | orchestrator | 2025-05-14 02:18:24 | INFO  | Task 7e08d0ec-2560-413c-9e2f-46068f4d4f56 (reboot) was prepared for execution. 2025-05-14 02:18:24.269976 | orchestrator | 2025-05-14 02:18:24 | INFO  | It takes a moment until task 7e08d0ec-2560-413c-9e2f-46068f4d4f56 (reboot) has been started and output is visible here. 2025-05-14 02:18:27.314170 | orchestrator | 2025-05-14 02:18:27.314354 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 02:18:27.314675 | orchestrator | 2025-05-14 02:18:27.316769 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 02:18:27.317633 | orchestrator | Wednesday 14 May 2025 02:18:27 +0000 (0:00:00.144) 0:00:00.144 ********* 2025-05-14 02:18:27.407817 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:18:27.408144 | orchestrator | 2025-05-14 02:18:27.409077 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 02:18:27.410657 | orchestrator | Wednesday 14 May 2025 02:18:27 +0000 (0:00:00.096) 0:00:00.241 ********* 2025-05-14 02:18:28.347659 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:18:28.348323 | orchestrator | 2025-05-14 02:18:28.350037 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 02:18:28.350761 | orchestrator | Wednesday 14 May 2025 02:18:28 +0000 (0:00:00.938) 0:00:01.180 ********* 2025-05-14 02:18:28.452677 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:18:28.453695 | orchestrator | 2025-05-14 02:18:28.455987 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 02:18:28.456107 | orchestrator | 2025-05-14 02:18:28.457382 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 02:18:28.458226 | orchestrator | Wednesday 14 May 2025 02:18:28 +0000 (0:00:00.106) 0:00:01.286 ********* 2025-05-14 02:18:28.544697 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:18:28.545410 | orchestrator | 2025-05-14 02:18:28.546275 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 02:18:28.547157 | orchestrator | Wednesday 14 May 2025 02:18:28 +0000 (0:00:00.091) 0:00:01.378 ********* 2025-05-14 02:18:29.190280 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:18:29.190861 | orchestrator | 2025-05-14 02:18:29.191826 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 02:18:29.192750 | orchestrator | Wednesday 14 May 2025 02:18:29 +0000 (0:00:00.643) 0:00:02.021 ********* 2025-05-14 02:18:29.295073 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:18:29.296068 | orchestrator | 2025-05-14 02:18:29.297529 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 02:18:29.297631 | orchestrator | 2025-05-14 02:18:29.299056 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 02:18:29.299805 | orchestrator | Wednesday 14 May 2025 02:18:29 +0000 (0:00:00.104) 0:00:02.126 ********* 2025-05-14 02:18:29.414082 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:18:29.414795 | orchestrator | 2025-05-14 02:18:29.416054 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] 
****************** 2025-05-14 02:18:29.417294 | orchestrator | Wednesday 14 May 2025 02:18:29 +0000 (0:00:00.120) 0:00:02.247 ********* 2025-05-14 02:18:30.189209 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:18:30.189430 | orchestrator | 2025-05-14 02:18:30.190960 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 02:18:30.193375 | orchestrator | Wednesday 14 May 2025 02:18:30 +0000 (0:00:00.775) 0:00:03.022 ********* 2025-05-14 02:18:30.298064 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:18:30.298148 | orchestrator | 2025-05-14 02:18:30.298516 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 02:18:30.299156 | orchestrator | 2025-05-14 02:18:30.299400 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 02:18:30.299439 | orchestrator | Wednesday 14 May 2025 02:18:30 +0000 (0:00:00.106) 0:00:03.128 ********* 2025-05-14 02:18:30.406837 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:18:30.408330 | orchestrator | 2025-05-14 02:18:30.408795 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 02:18:30.409806 | orchestrator | Wednesday 14 May 2025 02:18:30 +0000 (0:00:00.111) 0:00:03.240 ********* 2025-05-14 02:18:31.086536 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:18:31.086922 | orchestrator | 2025-05-14 02:18:31.088217 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 02:18:31.089020 | orchestrator | Wednesday 14 May 2025 02:18:31 +0000 (0:00:00.679) 0:00:03.919 ********* 2025-05-14 02:18:31.186885 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:18:31.187858 | orchestrator | 2025-05-14 02:18:31.188498 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 02:18:31.189360 | orchestrator | 2025-05-14 02:18:31.189968 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 02:18:31.191015 | orchestrator | Wednesday 14 May 2025 02:18:31 +0000 (0:00:00.098) 0:00:04.017 ********* 2025-05-14 02:18:31.288813 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:18:31.288939 | orchestrator | 2025-05-14 02:18:31.289915 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 02:18:31.290550 | orchestrator | Wednesday 14 May 2025 02:18:31 +0000 (0:00:00.103) 0:00:04.121 ********* 2025-05-14 02:18:31.959962 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:18:31.961979 | orchestrator | 2025-05-14 02:18:31.962425 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 02:18:31.963297 | orchestrator | Wednesday 14 May 2025 02:18:31 +0000 (0:00:00.669) 0:00:04.790 ********* 2025-05-14 02:18:32.066791 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:18:32.066986 | orchestrator | 2025-05-14 02:18:32.067830 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-14 02:18:32.068504 | orchestrator | 2025-05-14 02:18:32.069156 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-14 02:18:32.070107 | orchestrator | Wednesday 14 May 2025 02:18:32 +0000 (0:00:00.107) 0:00:04.898 ********* 2025-05-14 02:18:32.164746 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 02:18:32.165286 | orchestrator | 2025-05-14 02:18:32.166444 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-14 02:18:32.166772 | orchestrator | Wednesday 14 May 2025 02:18:32 +0000 (0:00:00.099) 0:00:04.998 ********* 2025-05-14 02:18:32.803788 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:18:32.803884 | orchestrator | 2025-05-14 02:18:32.803966 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-14 02:18:32.804079 | orchestrator | Wednesday 14 May 2025 02:18:32 +0000 (0:00:00.636) 0:00:05.635 ********* 2025-05-14 02:18:32.834000 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:18:32.834218 | orchestrator | 2025-05-14 02:18:32.835222 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:18:32.835531 | orchestrator | 2025-05-14 02:18:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:18:32.835945 | orchestrator | 2025-05-14 02:18:32 | INFO  | Please wait and do not abort execution. 2025-05-14 02:18:32.837052 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:18:32.837567 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:18:32.838185 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:18:32.838619 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:18:32.840250 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:18:32.840408 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:18:32.840429 | orchestrator | 2025-05-14 02:18:32.840947 | orchestrator | Wednesday 14 May 2025 02:18:32 +0000 (0:00:00.033) 0:00:05.668 ********* 2025-05-14 02:18:32.841162 | orchestrator | =============================================================================== 2025-05-14 02:18:32.841901 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.34s 2025-05-14 02:18:32.842117 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.62s 2025-05-14 02:18:32.842662 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.56s 2025-05-14 02:18:33.354275 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-14 02:18:34.806594 | orchestrator | 2025-05-14 02:18:34 | INFO  | Task f4662f4b-28c6-4667-b904-e36742c43e04 (wait-for-connection) was prepared for execution. 2025-05-14 02:18:34.806693 | orchestrator | 2025-05-14 02:18:34 | INFO  | It takes a moment until task f4662f4b-28c6-4667-b904-e36742c43e04 (wait-for-connection) has been started and output is visible here. 
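The reboot play above only triggers the reboot and deliberately skips the "wait for the reboot to complete" task; the wait-for-connection play queued here is what blocks until all nodes answer again. A minimal sketch of that two-step pattern with stock Ansible modules (the actual osism playbooks additionally guard the reboot behind the ireallymeanit variable, as the skipped "Exit playbook" tasks above show):

    # hypothetical sketch, not the osism playbook source
    - name: Reboot systems
      hosts: testbed-nodes
      gather_facts: false
      become: true
      tasks:
        - name: Reboot system - do not wait for the reboot to complete
          ansible.builtin.shell: sleep 2 && shutdown -r now
          async: 1
          poll: 0

    - name: Wait until remote systems are reachable
      hosts: testbed-nodes
      gather_facts: false
      tasks:
        - name: Wait until remote system is reachable
          ansible.builtin.wait_for_connection:
            delay: 5
            timeout: 600

Firing the shutdown asynchronously with poll: 0 is one common way to return immediately; the separate wait_for_connection play then succeeds as soon as SSH is reachable again, which the play below reports as roughly 13 seconds.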
2025-05-14 02:18:37.957841 | orchestrator | 2025-05-14 02:18:37.958159 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-14 02:18:37.959418 | orchestrator | 2025-05-14 02:18:37.961266 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-14 02:18:37.962198 | orchestrator | Wednesday 14 May 2025 02:18:37 +0000 (0:00:00.171) 0:00:00.171 ********* 2025-05-14 02:18:50.714317 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:18:50.714438 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:18:50.714456 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:18:50.714468 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:18:50.714480 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:18:50.714840 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:18:50.715804 | orchestrator | 2025-05-14 02:18:50.716881 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:18:50.717410 | orchestrator | 2025-05-14 02:18:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:18:50.717812 | orchestrator | 2025-05-14 02:18:50 | INFO  | Please wait and do not abort execution. 2025-05-14 02:18:50.719952 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:18:50.720064 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:18:50.720911 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:18:50.721763 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:18:50.725872 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:18:50.727061 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:18:50.727491 | orchestrator | 2025-05-14 02:18:50.728066 | orchestrator | Wednesday 14 May 2025 02:18:50 +0000 (0:00:12.754) 0:00:12.925 ********* 2025-05-14 02:18:50.728687 | orchestrator | =============================================================================== 2025-05-14 02:18:50.729232 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.75s 2025-05-14 02:18:51.217885 | orchestrator | + osism apply hddtemp 2025-05-14 02:18:52.657910 | orchestrator | 2025-05-14 02:18:52 | INFO  | Task f607130c-f646-474c-924c-a399349913fb (hddtemp) was prepared for execution. 2025-05-14 02:18:52.658013 | orchestrator | 2025-05-14 02:18:52 | INFO  | It takes a moment until task f607130c-f646-474c-924c-a399349913fb (hddtemp) has been started and output is visible here. 
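Note: the wait-for-connection play above does nothing more than poll every node until Ansible can log in again after the reboot. A minimal shell equivalent of that check (a hypothetical helper, not the play's actual implementation):

    # Sketch: poll SSH on each node until it answers or a timeout expires.
    for node in testbed-node-0 testbed-node-1 testbed-node-2 \
                testbed-node-3 testbed-node-4 testbed-node-5; do
      waited=0
      until ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true 2>/dev/null; do
        sleep 5
        waited=$((waited + 5))
        if [ "$waited" -ge 300 ]; then
          echo "$node did not come back within 300s" >&2
          exit 1
        fi
      done
    done

With all nodes reachable again, the deploy script continues with osism apply hddtemp, whose output follows.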
2025-05-14 02:18:55.829852 | orchestrator | 2025-05-14 02:18:55.832572 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-14 02:18:55.832670 | orchestrator | 2025-05-14 02:18:55.834122 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-14 02:18:55.835430 | orchestrator | Wednesday 14 May 2025 02:18:55 +0000 (0:00:00.199) 0:00:00.199 ********* 2025-05-14 02:18:55.976037 | orchestrator | ok: [testbed-manager] 2025-05-14 02:18:56.050248 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:18:56.127404 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:18:56.224615 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:18:56.302640 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:18:56.525182 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:18:56.526124 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:18:56.527209 | orchestrator | 2025-05-14 02:18:56.528238 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-14 02:18:56.530778 | orchestrator | Wednesday 14 May 2025 02:18:56 +0000 (0:00:00.696) 0:00:00.896 ********* 2025-05-14 02:18:57.679521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:18:57.679811 | orchestrator | 2025-05-14 02:18:57.681258 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-14 02:18:57.682371 | orchestrator | Wednesday 14 May 2025 02:18:57 +0000 (0:00:01.152) 0:00:02.049 ********* 2025-05-14 02:18:59.523614 | orchestrator | ok: [testbed-manager] 2025-05-14 02:18:59.526150 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:18:59.526943 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:18:59.528066 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:18:59.530773 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:18:59.530813 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:18:59.530827 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:18:59.530838 | orchestrator | 2025-05-14 02:18:59.530850 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-14 02:18:59.530863 | orchestrator | Wednesday 14 May 2025 02:18:59 +0000 (0:00:01.846) 0:00:03.896 ********* 2025-05-14 02:19:00.099936 | orchestrator | changed: [testbed-manager] 2025-05-14 02:19:00.534363 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:19:00.536026 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:19:00.536706 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:19:00.537404 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:19:00.539178 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:19:00.539648 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:19:00.540238 | orchestrator | 2025-05-14 02:19:00.540977 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-14 02:19:00.541411 | orchestrator | Wednesday 14 May 2025 02:19:00 +0000 (0:00:01.010) 0:00:04.906 ********* 2025-05-14 02:19:02.585864 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:19:02.585978 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:19:02.586780 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:19:02.587343 | orchestrator | ok: [testbed-node-3] 2025-05-14 
02:19:02.588415 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:19:02.589673 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:19:02.590856 | orchestrator | ok: [testbed-manager] 2025-05-14 02:19:02.591118 | orchestrator | 2025-05-14 02:19:02.591673 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-14 02:19:02.592154 | orchestrator | Wednesday 14 May 2025 02:19:02 +0000 (0:00:02.050) 0:00:06.956 ********* 2025-05-14 02:19:02.810234 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:19:02.885166 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:19:02.959156 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:19:03.031365 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:19:03.138983 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:19:03.139963 | orchestrator | changed: [testbed-manager] 2025-05-14 02:19:03.143430 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:19:03.144074 | orchestrator | 2025-05-14 02:19:03.145061 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-14 02:19:03.145755 | orchestrator | Wednesday 14 May 2025 02:19:03 +0000 (0:00:00.556) 0:00:07.513 ********* 2025-05-14 02:19:15.451997 | orchestrator | changed: [testbed-manager] 2025-05-14 02:19:15.452109 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:19:15.452128 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:19:15.452198 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:19:15.452281 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:19:15.453663 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:19:15.454276 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:19:15.454836 | orchestrator | 2025-05-14 02:19:15.455694 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-14 02:19:15.456668 | orchestrator | Wednesday 14 May 2025 02:19:15 +0000 (0:00:12.301) 0:00:19.814 ********* 2025-05-14 02:19:16.677929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:19:16.678266 | orchestrator | 2025-05-14 02:19:16.683733 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-05-14 02:19:16.683781 | orchestrator | Wednesday 14 May 2025 02:19:16 +0000 (0:00:01.231) 0:00:21.046 ********* 2025-05-14 02:19:18.522873 | orchestrator | changed: [testbed-manager] 2025-05-14 02:19:18.523055 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:19:18.524410 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:19:18.524475 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:19:18.525678 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:19:18.526214 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:19:18.526956 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:19:18.527908 | orchestrator | 2025-05-14 02:19:18.528231 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:19:18.529225 | orchestrator | 2025-05-14 02:19:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:19:18.529277 | orchestrator | 2025-05-14 02:19:18 | INFO  | Please wait and do not abort execution. 
2025-05-14 02:19:18.529884 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:19:18.530983 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:18.531145 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:18.531497 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:18.532589 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:18.532962 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:18.533415 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:18.535222 | orchestrator | 2025-05-14 02:19:18.535745 | orchestrator | Wednesday 14 May 2025 02:19:18 +0000 (0:00:01.845) 0:00:22.892 ********* 2025-05-14 02:19:18.536539 | orchestrator | =============================================================================== 2025-05-14 02:19:18.539187 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.30s 2025-05-14 02:19:18.539748 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.05s 2025-05-14 02:19:18.540089 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.85s 2025-05-14 02:19:18.540978 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.85s 2025-05-14 02:19:18.541516 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.23s 2025-05-14 02:19:18.542347 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.15s 2025-05-14 02:19:18.542852 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.01s 2025-05-14 02:19:18.543511 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.70s 2025-05-14 02:19:18.543983 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.56s 2025-05-14 02:19:19.015454 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-14 02:19:20.527891 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-14 02:19:20.527992 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-14 02:19:20.528010 | orchestrator | + local max_attempts=60 2025-05-14 02:19:20.528023 | orchestrator | + local name=ceph-ansible 2025-05-14 02:19:20.528034 | orchestrator | + local attempt_num=1 2025-05-14 02:19:20.528572 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-14 02:19:20.557947 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 02:19:20.558076 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-14 02:19:20.558094 | orchestrator | + local max_attempts=60 2025-05-14 02:19:20.558107 | orchestrator | + local name=kolla-ansible 2025-05-14 02:19:20.558118 | orchestrator | + local attempt_num=1 2025-05-14 02:19:20.558129 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-14 02:19:20.586308 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 02:19:20.586416 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-05-14 02:19:20.586431 | orchestrator | + local max_attempts=60 2025-05-14 02:19:20.586443 | orchestrator | + local name=osism-ansible 2025-05-14 02:19:20.586454 | orchestrator | + local attempt_num=1 2025-05-14 02:19:20.586880 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-14 02:19:20.609947 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-14 02:19:20.610003 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-14 02:19:20.610062 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-14 02:19:20.784943 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-14 02:19:20.921194 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-14 02:19:21.062440 | orchestrator | ARA in osism-ansible already disabled. 2025-05-14 02:19:21.259317 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-14 02:19:21.260144 | orchestrator | + osism apply gather-facts 2025-05-14 02:19:22.584612 | orchestrator | 2025-05-14 02:19:22 | INFO  | Task ad463821-0b8a-43a8-8c7d-ed7d745ed218 (gather-facts) was prepared for execution. 2025-05-14 02:19:22.584816 | orchestrator | 2025-05-14 02:19:22 | INFO  | It takes a moment until task ad463821-0b8a-43a8-8c7d-ed7d745ed218 (gather-facts) has been started and output is visible here. 2025-05-14 02:19:25.511321 | orchestrator | 2025-05-14 02:19:25.511522 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 02:19:25.512615 | orchestrator | 2025-05-14 02:19:25.514815 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 02:19:25.515656 | orchestrator | Wednesday 14 May 2025 02:19:25 +0000 (0:00:00.148) 0:00:00.148 ********* 2025-05-14 02:19:30.484451 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:19:30.484874 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:19:30.486254 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:19:30.487241 | orchestrator | ok: [testbed-manager] 2025-05-14 02:19:30.487534 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:19:30.488517 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:19:30.491679 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:19:30.491762 | orchestrator | 2025-05-14 02:19:30.491778 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-14 02:19:30.491790 | orchestrator | 2025-05-14 02:19:30.491801 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-14 02:19:30.491813 | orchestrator | Wednesday 14 May 2025 02:19:30 +0000 (0:00:04.976) 0:00:05.124 ********* 2025-05-14 02:19:30.657670 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:19:30.735887 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:19:30.821830 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:19:30.901589 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:19:30.978002 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:19:31.025105 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:19:31.025480 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:19:31.026316 | orchestrator | 2025-05-14 02:19:31.027070 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:19:31.027851 | orchestrator | 2025-05-14 02:19:31 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-05-14 02:19:31.028238 | orchestrator | 2025-05-14 02:19:31 | INFO  | Please wait and do not abort execution. 2025-05-14 02:19:31.028841 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:31.029899 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:31.030439 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:31.031537 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:31.032235 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:31.032963 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:31.033790 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:19:31.034388 | orchestrator | 2025-05-14 02:19:31.034894 | orchestrator | Wednesday 14 May 2025 02:19:31 +0000 (0:00:00.540) 0:00:05.665 ********* 2025-05-14 02:19:31.035196 | orchestrator | =============================================================================== 2025-05-14 02:19:31.035913 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.98s 2025-05-14 02:19:31.036250 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-05-14 02:19:31.638524 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-14 02:19:31.656315 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-14 02:19:31.667778 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-14 02:19:31.686344 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-14 02:19:31.708094 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-14 02:19:31.723356 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-14 02:19:31.736310 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-14 02:19:31.757974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-14 02:19:31.777783 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-14 02:19:31.798431 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-14 02:19:31.817520 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-14 02:19:31.837826 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-14 02:19:31.853383 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-05-14 02:19:31.866863 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-14 02:19:31.887279 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-14 02:19:31.900121 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-14 02:19:31.912968 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-14 02:19:31.926611 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-14 02:19:31.940867 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-14 02:19:31.961043 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-14 02:19:31.973993 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-14 02:19:32.335218 | orchestrator | ok: Runtime: 0:26:04.192180 2025-05-14 02:19:32.441733 | 2025-05-14 02:19:32.441901 | TASK [Deploy services] 2025-05-14 02:19:32.975611 | orchestrator | skipping: Conditional result was False 2025-05-14 02:19:32.989352 | 2025-05-14 02:19:32.989515 | TASK [Deploy in a nutshell] 2025-05-14 02:19:33.683031 | orchestrator | + set -e 2025-05-14 02:19:33.683220 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-14 02:19:33.683243 | orchestrator | ++ export INTERACTIVE=false 2025-05-14 02:19:33.683264 | orchestrator | ++ INTERACTIVE=false 2025-05-14 02:19:33.683278 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-14 02:19:33.683291 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-14 02:19:33.683304 | orchestrator | + source /opt/manager-vars.sh 2025-05-14 02:19:33.683351 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-14 02:19:33.683380 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-14 02:19:33.683394 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-14 02:19:33.683410 | orchestrator | ++ CEPH_VERSION=reef 2025-05-14 02:19:33.683422 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-14 02:19:33.683439 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-14 02:19:33.683450 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-14 02:19:33.683471 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-14 02:19:33.683482 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-14 02:19:33.683496 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-14 02:19:33.683507 | orchestrator | ++ export ARA=false 2025-05-14 02:19:33.683518 | orchestrator | ++ ARA=false 2025-05-14 02:19:33.683529 | orchestrator | ++ export TEMPEST=false 2025-05-14 02:19:33.683541 | orchestrator | ++ TEMPEST=false 2025-05-14 02:19:33.683552 | orchestrator | ++ export IS_ZUUL=true 2025-05-14 02:19:33.683562 | orchestrator | ++ IS_ZUUL=true 2025-05-14 02:19:33.683586 | orchestrator | 2025-05-14 02:19:33.683598 | orchestrator | # PULL IMAGES 2025-05-14 02:19:33.683609 | orchestrator | 2025-05-14 02:19:33.683621 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.80 2025-05-14 02:19:33.683631 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.80 2025-05-14 02:19:33.683642 | orchestrator | ++ export EXTERNAL_API=false 2025-05-14 02:19:33.683653 | orchestrator | ++ EXTERNAL_API=false 2025-05-14 
02:19:33.683683 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-14 02:19:33.683694 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-14 02:19:33.683732 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-14 02:19:33.683745 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-14 02:19:33.683756 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-14 02:19:33.683766 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-14 02:19:33.683777 | orchestrator | + echo 2025-05-14 02:19:33.683788 | orchestrator | + echo '# PULL IMAGES' 2025-05-14 02:19:33.683798 | orchestrator | + echo 2025-05-14 02:19:33.684431 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-14 02:19:33.746515 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-14 02:19:33.746605 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-14 02:19:35.212381 | orchestrator | 2025-05-14 02:19:35 | INFO  | Trying to run play pull-images in environment custom 2025-05-14 02:19:35.260347 | orchestrator | 2025-05-14 02:19:35 | INFO  | Task 719b0c02-2284-41fc-82ed-2ffcd64acdad (pull-images) was prepared for execution. 2025-05-14 02:19:35.260448 | orchestrator | 2025-05-14 02:19:35 | INFO  | It takes a moment until task 719b0c02-2284-41fc-82ed-2ffcd64acdad (pull-images) has been started and output is visible here. 2025-05-14 02:19:38.471776 | orchestrator | 2025-05-14 02:19:38.472666 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-14 02:19:38.476011 | orchestrator | 2025-05-14 02:19:38.476087 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-14 02:19:38.476600 | orchestrator | Wednesday 14 May 2025 02:19:38 +0000 (0:00:00.142) 0:00:00.142 ********* 2025-05-14 02:20:19.836582 | orchestrator | changed: [testbed-manager] 2025-05-14 02:20:19.836764 | orchestrator | 2025-05-14 02:20:19.836784 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-14 02:20:19.836797 | orchestrator | Wednesday 14 May 2025 02:20:19 +0000 (0:00:41.362) 0:00:41.504 ********* 2025-05-14 02:21:05.977364 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-14 02:21:05.977482 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-14 02:21:05.977503 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-14 02:21:05.977531 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-14 02:21:05.978290 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-14 02:21:05.979970 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-14 02:21:05.980352 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-14 02:21:05.983677 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-14 02:21:05.987089 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-14 02:21:05.988116 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-05-14 02:21:05.988928 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-14 02:21:05.989954 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-14 02:21:05.991351 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-14 02:21:05.991932 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-14 02:21:05.992514 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-14 02:21:05.994457 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-14 02:21:05.994934 | 
orchestrator | changed: [testbed-manager] => (item=octavia) 2025-05-14 02:21:05.998117 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-14 02:21:05.998890 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-14 02:21:05.999511 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-14 02:21:06.000086 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-14 02:21:06.000721 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-14 02:21:06.001215 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-14 02:21:06.003178 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-14 02:21:06.003817 | orchestrator | 2025-05-14 02:21:06.004835 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:21:06.005470 | orchestrator | 2025-05-14 02:21:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:21:06.005488 | orchestrator | 2025-05-14 02:21:06 | INFO  | Please wait and do not abort execution. 2025-05-14 02:21:06.008273 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:21:06.009448 | orchestrator | 2025-05-14 02:21:06.009825 | orchestrator | Wednesday 14 May 2025 02:21:05 +0000 (0:00:46.140) 0:01:27.646 ********* 2025-05-14 02:21:06.010414 | orchestrator | =============================================================================== 2025-05-14 02:21:06.011715 | orchestrator | Pull other images ------------------------------------------------------ 46.14s 2025-05-14 02:21:06.016051 | orchestrator | Pull keystone image ---------------------------------------------------- 41.36s 2025-05-14 02:21:08.346647 | orchestrator | 2025-05-14 02:21:08 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-14 02:21:08.395934 | orchestrator | 2025-05-14 02:21:08 | INFO  | Task 80fd97c8-4670-47c8-8145-622964552102 (wipe-partitions) was prepared for execution. 2025-05-14 02:21:08.396054 | orchestrator | 2025-05-14 02:21:08 | INFO  | It takes a moment until task 80fd97c8-4670-47c8-8145-622964552102 (wipe-partitions) has been started and output is visible here. 
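Note: the wipe-partitions play whose output follows prepares the Ceph OSD candidate disks on the storage nodes (testbed-node-3/4/5): it looks for leftover rook/ceph logical devices, checks that /dev/sdb, /dev/sdc and /dev/sdd are present, wipes their filesystem signatures, zeroes the start of each disk and retriggers udev. Run by hand on a single node, roughly the same effect would be achieved with (a sketch of equivalent commands, not the play's actual tasks):

    # Sketch: clean the OSD candidate disks on one storage node.
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
      sudo wipefs --all "$dev"                                     # drop filesystem/RAID signatures
      sudo dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct   # overwrite first 32M with zeros
    done
    sudo udevadm control --reload-rules    # reload udev rules
    sudo udevadm trigger                   # request device events from the kernel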
2025-05-14 02:21:11.664223 | orchestrator | 2025-05-14 02:21:11.664576 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-14 02:21:11.665006 | orchestrator | 2025-05-14 02:21:11.665606 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-14 02:21:11.665892 | orchestrator | Wednesday 14 May 2025 02:21:11 +0000 (0:00:00.127) 0:00:00.127 ********* 2025-05-14 02:21:12.236526 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:21:12.236645 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:21:12.236657 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:21:12.236666 | orchestrator | 2025-05-14 02:21:12.236938 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-14 02:21:12.237398 | orchestrator | Wednesday 14 May 2025 02:21:12 +0000 (0:00:00.576) 0:00:00.704 ********* 2025-05-14 02:21:12.386315 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:12.477902 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:21:12.479119 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:12.479253 | orchestrator | 2025-05-14 02:21:12.480465 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-14 02:21:12.484501 | orchestrator | Wednesday 14 May 2025 02:21:12 +0000 (0:00:00.240) 0:00:00.945 ********* 2025-05-14 02:21:13.211628 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:21:13.212573 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:21:13.214465 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:21:13.216355 | orchestrator | 2025-05-14 02:21:13.216853 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-14 02:21:13.217882 | orchestrator | Wednesday 14 May 2025 02:21:13 +0000 (0:00:00.731) 0:00:01.676 ********* 2025-05-14 02:21:13.370185 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:13.468287 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:21:13.468744 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:13.469176 | orchestrator | 2025-05-14 02:21:13.470660 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-14 02:21:13.475193 | orchestrator | Wednesday 14 May 2025 02:21:13 +0000 (0:00:00.259) 0:00:01.935 ********* 2025-05-14 02:21:14.577116 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-14 02:21:14.577223 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-14 02:21:14.578814 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-14 02:21:14.578847 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-14 02:21:14.581337 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-14 02:21:14.581495 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-14 02:21:14.582097 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-14 02:21:14.582184 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-14 02:21:14.582744 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-14 02:21:14.583367 | orchestrator | 2025-05-14 02:21:14.583395 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-14 02:21:14.583411 | orchestrator | Wednesday 14 May 2025 02:21:14 +0000 (0:00:01.110) 0:00:03.046 ********* 2025-05-14 02:21:15.971395 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-14 02:21:15.973143 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-14 02:21:15.973368 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-14 02:21:15.975526 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-14 02:21:15.975619 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-05-14 02:21:15.975852 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-14 02:21:15.976168 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-14 02:21:15.976580 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-14 02:21:15.976937 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-14 02:21:15.977294 | orchestrator | 2025-05-14 02:21:15.977720 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-14 02:21:15.978103 | orchestrator | Wednesday 14 May 2025 02:21:15 +0000 (0:00:01.390) 0:00:04.437 ********* 2025-05-14 02:21:18.394224 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-14 02:21:18.394470 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-14 02:21:18.395314 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-14 02:21:18.399229 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-14 02:21:18.399316 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-14 02:21:18.403833 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-14 02:21:18.403875 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-14 02:21:18.406798 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-14 02:21:18.407449 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-14 02:21:18.411796 | orchestrator | 2025-05-14 02:21:18.411823 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-14 02:21:18.412020 | orchestrator | Wednesday 14 May 2025 02:21:18 +0000 (0:00:02.420) 0:00:06.858 ********* 2025-05-14 02:21:19.093301 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:21:19.093977 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:21:19.095041 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:21:19.095902 | orchestrator | 2025-05-14 02:21:19.096973 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-05-14 02:21:19.096997 | orchestrator | Wednesday 14 May 2025 02:21:19 +0000 (0:00:00.702) 0:00:07.560 ********* 2025-05-14 02:21:19.680972 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:21:19.681912 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:21:19.683969 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:21:19.687949 | orchestrator | 2025-05-14 02:21:19.688553 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:21:19.688649 | orchestrator | 2025-05-14 02:21:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:21:19.688667 | orchestrator | 2025-05-14 02:21:19 | INFO  | Please wait and do not abort execution. 
2025-05-14 02:21:19.688842 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:21:19.689475 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:21:19.689645 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:21:19.689911 | orchestrator | 2025-05-14 02:21:19.693010 | orchestrator | Wednesday 14 May 2025 02:21:19 +0000 (0:00:00.588) 0:00:08.149 ********* 2025-05-14 02:21:19.693080 | orchestrator | =============================================================================== 2025-05-14 02:21:19.693608 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.42s 2025-05-14 02:21:19.693791 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.39s 2025-05-14 02:21:19.694143 | orchestrator | Check device availability ----------------------------------------------- 1.11s 2025-05-14 02:21:19.694551 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.73s 2025-05-14 02:21:19.694882 | orchestrator | Reload udev rules ------------------------------------------------------- 0.70s 2025-05-14 02:21:19.695231 | orchestrator | Request device events from the kernel ----------------------------------- 0.59s 2025-05-14 02:21:19.695634 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s 2025-05-14 02:21:19.695984 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2025-05-14 02:21:19.696359 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2025-05-14 02:21:21.773794 | orchestrator | 2025-05-14 02:21:21 | INFO  | Task 35a88152-15cd-422d-af51-3fb75fe5b21d (facts) was prepared for execution. 2025-05-14 02:21:21.773891 | orchestrator | 2025-05-14 02:21:21 | INFO  | It takes a moment until task 35a88152-15cd-422d-af51-3fb75fe5b21d (facts) has been started and output is visible here. 
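Note: the facts play that follows first ensures the custom-facts directory exists on every host and then regathers facts. For orientation, this is how Ansible's local ("custom") facts mechanism generally works on a host; the fact file below is purely hypothetical and not something this run creates:

    # Sketch: Ansible reports anything in /etc/ansible/facts.d as ansible_local.
    sudo mkdir -p /etc/ansible/facts.d
    # A static JSON .fact file (hypothetical content) would show up as
    # ansible_local.testbed on the next fact-gathering run.
    echo '{"deployed_by": "testbed"}' | sudo tee /etc/ansible/facts.d/testbed.fact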
2025-05-14 02:21:24.990961 | orchestrator | 2025-05-14 02:21:24.991064 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-14 02:21:24.991189 | orchestrator | 2025-05-14 02:21:24.991574 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-14 02:21:24.992348 | orchestrator | Wednesday 14 May 2025 02:21:24 +0000 (0:00:00.199) 0:00:00.199 ********* 2025-05-14 02:21:26.042794 | orchestrator | ok: [testbed-manager] 2025-05-14 02:21:26.042870 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:21:26.042912 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:21:26.043454 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:21:26.043595 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:21:26.044150 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:21:26.044381 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:21:26.046147 | orchestrator | 2025-05-14 02:21:26.046389 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-14 02:21:26.046762 | orchestrator | Wednesday 14 May 2025 02:21:26 +0000 (0:00:01.049) 0:00:01.248 ********* 2025-05-14 02:21:26.205841 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:21:26.283763 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:21:26.363742 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:21:26.438136 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:21:26.515661 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:27.260101 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:21:27.260855 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:21:27.262794 | orchestrator | 2025-05-14 02:21:27.264997 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 02:21:27.266190 | orchestrator | 2025-05-14 02:21:27.267718 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 02:21:27.269178 | orchestrator | Wednesday 14 May 2025 02:21:27 +0000 (0:00:01.221) 0:00:02.469 ********* 2025-05-14 02:21:31.844918 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:21:31.846808 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:21:31.848736 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:21:31.848847 | orchestrator | ok: [testbed-manager] 2025-05-14 02:21:31.849928 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:21:31.851203 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:21:31.853636 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:21:31.853674 | orchestrator | 2025-05-14 02:21:31.853728 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-14 02:21:31.853751 | orchestrator | 2025-05-14 02:21:31.854099 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-14 02:21:31.854822 | orchestrator | Wednesday 14 May 2025 02:21:31 +0000 (0:00:04.586) 0:00:07.055 ********* 2025-05-14 02:21:32.177422 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:21:32.252042 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:21:32.326658 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:21:32.402328 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:21:32.495101 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:32.541624 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:21:32.542289 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 02:21:32.543590 | orchestrator | 2025-05-14 02:21:32.544546 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:21:32.545172 | orchestrator | 2025-05-14 02:21:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:21:32.545214 | orchestrator | 2025-05-14 02:21:32 | INFO  | Please wait and do not abort execution. 2025-05-14 02:21:32.545449 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:21:32.546283 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:21:32.546327 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:21:32.546725 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:21:32.546983 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:21:32.547480 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:21:32.547991 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:21:32.548137 | orchestrator | 2025-05-14 02:21:32.548395 | orchestrator | Wednesday 14 May 2025 02:21:32 +0000 (0:00:00.695) 0:00:07.750 ********* 2025-05-14 02:21:32.548987 | orchestrator | =============================================================================== 2025-05-14 02:21:32.549587 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.59s 2025-05-14 02:21:32.549625 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s 2025-05-14 02:21:32.549812 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.05s 2025-05-14 02:21:32.550127 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.70s 2025-05-14 02:21:34.921472 | orchestrator | 2025-05-14 02:21:34 | INFO  | Task 835c5e48-65dd-49c0-a34a-778967eda172 (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-14 02:21:34.921567 | orchestrator | 2025-05-14 02:21:34 | INFO  | It takes a moment until task 835c5e48-65dd-49c0-a34a-778967eda172 (ceph-configure-lvm-volumes) has been started and output is visible here. 
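Note: the ceph-configure-lvm-volumes play below runs against the storage nodes (testbed-node-3 in this excerpt) and builds an inventory of usable block devices: the raw disks, their stable /dev/disk/by-id links and their partitions. From that inventory it picks the OSD candidates (only sdb and sdc receive OSD UUIDs in this run) and assigns each a fixed UUID for the later LVM setup. The device inventory it starts from corresponds roughly to (a sketch of equivalent manual commands):

    # Sketch: the kind of device inventory the play collects before
    # deciding which disks become OSDs.
    lsblk --nodeps -o NAME,SIZE,TYPE   # initial list of block devices (sda..sdd, loop*, sr0)
    ls -l /dev/disk/by-id/             # stable by-id links (scsi-0QEMU_..., ata-QEMU_DVD-ROM_...)
    lsblk -o NAME,TYPE /dev/sda        # known partitions (sda1, sda14, sda15, sda16)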
2025-05-14 02:21:38.855475 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 02:21:39.537144 | orchestrator | 2025-05-14 02:21:39.537256 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-14 02:21:39.537271 | orchestrator | 2025-05-14 02:21:39.537283 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 02:21:39.540455 | orchestrator | Wednesday 14 May 2025 02:21:39 +0000 (0:00:00.584) 0:00:00.584 ********* 2025-05-14 02:21:39.887244 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 02:21:39.887460 | orchestrator | 2025-05-14 02:21:39.887942 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 02:21:39.892871 | orchestrator | Wednesday 14 May 2025 02:21:39 +0000 (0:00:00.354) 0:00:00.939 ********* 2025-05-14 02:21:40.170002 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:21:40.171680 | orchestrator | 2025-05-14 02:21:40.172567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:40.172594 | orchestrator | Wednesday 14 May 2025 02:21:40 +0000 (0:00:00.283) 0:00:01.223 ********* 2025-05-14 02:21:40.693875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-14 02:21:40.694136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-14 02:21:40.696777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-14 02:21:40.697595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-14 02:21:40.699108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-14 02:21:40.701677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-14 02:21:40.701816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-14 02:21:40.701832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-14 02:21:40.701844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-14 02:21:40.701855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-14 02:21:40.702082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-14 02:21:40.702798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-14 02:21:40.703255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-14 02:21:40.703943 | orchestrator | 2025-05-14 02:21:40.704794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:40.705109 | orchestrator | Wednesday 14 May 2025 02:21:40 +0000 (0:00:00.520) 0:00:01.744 ********* 2025-05-14 02:21:40.893003 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:40.894062 | orchestrator | 2025-05-14 02:21:40.895093 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:40.896161 | orchestrator | Wednesday 14 May 2025 02:21:40 +0000 
(0:00:00.201) 0:00:01.945 ********* 2025-05-14 02:21:41.127071 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:41.127822 | orchestrator | 2025-05-14 02:21:41.128477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:41.129291 | orchestrator | Wednesday 14 May 2025 02:21:41 +0000 (0:00:00.235) 0:00:02.181 ********* 2025-05-14 02:21:41.337221 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:41.337462 | orchestrator | 2025-05-14 02:21:41.337881 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:41.338555 | orchestrator | Wednesday 14 May 2025 02:21:41 +0000 (0:00:00.208) 0:00:02.389 ********* 2025-05-14 02:21:41.536369 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:41.538815 | orchestrator | 2025-05-14 02:21:41.543052 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:41.543092 | orchestrator | Wednesday 14 May 2025 02:21:41 +0000 (0:00:00.200) 0:00:02.589 ********* 2025-05-14 02:21:41.727025 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:41.728548 | orchestrator | 2025-05-14 02:21:41.732274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:41.733630 | orchestrator | Wednesday 14 May 2025 02:21:41 +0000 (0:00:00.190) 0:00:02.779 ********* 2025-05-14 02:21:41.927443 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:41.928104 | orchestrator | 2025-05-14 02:21:41.928414 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:41.928783 | orchestrator | Wednesday 14 May 2025 02:21:41 +0000 (0:00:00.198) 0:00:02.978 ********* 2025-05-14 02:21:42.144951 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:42.147293 | orchestrator | 2025-05-14 02:21:42.148115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:42.148852 | orchestrator | Wednesday 14 May 2025 02:21:42 +0000 (0:00:00.217) 0:00:03.195 ********* 2025-05-14 02:21:42.332613 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:42.333039 | orchestrator | 2025-05-14 02:21:42.333070 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:42.333082 | orchestrator | Wednesday 14 May 2025 02:21:42 +0000 (0:00:00.183) 0:00:03.379 ********* 2025-05-14 02:21:42.955003 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314) 2025-05-14 02:21:42.955113 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314) 2025-05-14 02:21:42.956123 | orchestrator | 2025-05-14 02:21:42.957520 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:42.958286 | orchestrator | Wednesday 14 May 2025 02:21:42 +0000 (0:00:00.626) 0:00:04.005 ********* 2025-05-14 02:21:43.894561 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1098e660-21c4-40f1-8a57-5405cc8713a2) 2025-05-14 02:21:43.894789 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1098e660-21c4-40f1-8a57-5405cc8713a2) 2025-05-14 02:21:43.895299 | orchestrator | 2025-05-14 02:21:43.895951 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 
02:21:43.896834 | orchestrator | Wednesday 14 May 2025 02:21:43 +0000 (0:00:00.942) 0:00:04.947 ********* 2025-05-14 02:21:44.332611 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_41d88fd2-4f90-4be6-b9c2-0d02d8e1d9f7) 2025-05-14 02:21:44.333683 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_41d88fd2-4f90-4be6-b9c2-0d02d8e1d9f7) 2025-05-14 02:21:44.334295 | orchestrator | 2025-05-14 02:21:44.334431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:44.334517 | orchestrator | Wednesday 14 May 2025 02:21:44 +0000 (0:00:00.436) 0:00:05.384 ********* 2025-05-14 02:21:44.799988 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_37cfb3af-bf99-4b3f-874b-d71467a37a95) 2025-05-14 02:21:44.802923 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_37cfb3af-bf99-4b3f-874b-d71467a37a95) 2025-05-14 02:21:44.805669 | orchestrator | 2025-05-14 02:21:44.805745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:44.805758 | orchestrator | Wednesday 14 May 2025 02:21:44 +0000 (0:00:00.469) 0:00:05.853 ********* 2025-05-14 02:21:45.193192 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 02:21:45.193350 | orchestrator | 2025-05-14 02:21:45.194117 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:45.194349 | orchestrator | Wednesday 14 May 2025 02:21:45 +0000 (0:00:00.392) 0:00:06.245 ********* 2025-05-14 02:21:45.723351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-14 02:21:45.723590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-14 02:21:45.727667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-14 02:21:45.727754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-14 02:21:45.727767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-14 02:21:45.727779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-14 02:21:45.727790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-14 02:21:45.727856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-14 02:21:45.728192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-14 02:21:45.729147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-14 02:21:45.729420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-14 02:21:45.729893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-14 02:21:45.730610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-14 02:21:45.731645 | orchestrator | 2025-05-14 02:21:45.732104 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:45.732207 | orchestrator | Wednesday 14 May 2025 02:21:45 +0000 
(0:00:00.530) 0:00:06.775 ********* 2025-05-14 02:21:45.922571 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:45.922954 | orchestrator | 2025-05-14 02:21:45.924454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:45.924486 | orchestrator | Wednesday 14 May 2025 02:21:45 +0000 (0:00:00.200) 0:00:06.976 ********* 2025-05-14 02:21:46.127272 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:46.127905 | orchestrator | 2025-05-14 02:21:46.128155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:46.128417 | orchestrator | Wednesday 14 May 2025 02:21:46 +0000 (0:00:00.205) 0:00:07.181 ********* 2025-05-14 02:21:46.323981 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:46.324071 | orchestrator | 2025-05-14 02:21:46.324084 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:46.324096 | orchestrator | Wednesday 14 May 2025 02:21:46 +0000 (0:00:00.190) 0:00:07.372 ********* 2025-05-14 02:21:46.504842 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:46.508164 | orchestrator | 2025-05-14 02:21:46.508777 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:46.509033 | orchestrator | Wednesday 14 May 2025 02:21:46 +0000 (0:00:00.184) 0:00:07.556 ********* 2025-05-14 02:21:47.182258 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:47.183782 | orchestrator | 2025-05-14 02:21:47.183911 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:47.183945 | orchestrator | Wednesday 14 May 2025 02:21:47 +0000 (0:00:00.679) 0:00:08.236 ********* 2025-05-14 02:21:47.463100 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:47.463439 | orchestrator | 2025-05-14 02:21:47.464453 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:47.465193 | orchestrator | Wednesday 14 May 2025 02:21:47 +0000 (0:00:00.278) 0:00:08.515 ********* 2025-05-14 02:21:47.673758 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:47.673967 | orchestrator | 2025-05-14 02:21:47.674359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:47.674822 | orchestrator | Wednesday 14 May 2025 02:21:47 +0000 (0:00:00.209) 0:00:08.725 ********* 2025-05-14 02:21:47.928575 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:47.928679 | orchestrator | 2025-05-14 02:21:47.931858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:47.932101 | orchestrator | Wednesday 14 May 2025 02:21:47 +0000 (0:00:00.256) 0:00:08.981 ********* 2025-05-14 02:21:49.229902 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-14 02:21:49.230812 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-14 02:21:49.232024 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-14 02:21:49.232450 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-14 02:21:49.232991 | orchestrator | 2025-05-14 02:21:49.233832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:49.234370 | orchestrator | Wednesday 14 May 2025 02:21:49 +0000 (0:00:01.302) 0:00:10.283 ********* 2025-05-14 02:21:49.481573 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 02:21:49.485618 | orchestrator | 2025-05-14 02:21:49.488208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:49.488249 | orchestrator | Wednesday 14 May 2025 02:21:49 +0000 (0:00:00.247) 0:00:10.530 ********* 2025-05-14 02:21:49.764411 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:49.764527 | orchestrator | 2025-05-14 02:21:49.765900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:49.766266 | orchestrator | Wednesday 14 May 2025 02:21:49 +0000 (0:00:00.284) 0:00:10.815 ********* 2025-05-14 02:21:49.995283 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:49.997530 | orchestrator | 2025-05-14 02:21:49.997591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:21:49.997613 | orchestrator | Wednesday 14 May 2025 02:21:49 +0000 (0:00:00.233) 0:00:11.048 ********* 2025-05-14 02:21:50.263836 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:50.265301 | orchestrator | 2025-05-14 02:21:50.267154 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-14 02:21:50.268122 | orchestrator | Wednesday 14 May 2025 02:21:50 +0000 (0:00:00.267) 0:00:11.316 ********* 2025-05-14 02:21:50.465938 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-05-14 02:21:50.466818 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-14 02:21:50.467394 | orchestrator | 2025-05-14 02:21:50.469605 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-14 02:21:50.470945 | orchestrator | Wednesday 14 May 2025 02:21:50 +0000 (0:00:00.199) 0:00:11.516 ********* 2025-05-14 02:21:50.805587 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:50.807289 | orchestrator | 2025-05-14 02:21:50.811491 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-14 02:21:50.811523 | orchestrator | Wednesday 14 May 2025 02:21:50 +0000 (0:00:00.341) 0:00:11.857 ********* 2025-05-14 02:21:50.947051 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:50.948092 | orchestrator | 2025-05-14 02:21:50.949342 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-14 02:21:50.950436 | orchestrator | Wednesday 14 May 2025 02:21:50 +0000 (0:00:00.139) 0:00:11.997 ********* 2025-05-14 02:21:51.088942 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:51.089067 | orchestrator | 2025-05-14 02:21:51.089081 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-14 02:21:51.089091 | orchestrator | Wednesday 14 May 2025 02:21:51 +0000 (0:00:00.140) 0:00:12.138 ********* 2025-05-14 02:21:51.233167 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:21:51.233276 | orchestrator | 2025-05-14 02:21:51.233559 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-14 02:21:51.233809 | orchestrator | Wednesday 14 May 2025 02:21:51 +0000 (0:00:00.147) 0:00:12.285 ********* 2025-05-14 02:21:51.424506 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cb58592c-122c-52e3-870d-c9748cfaa53d'}}) 2025-05-14 02:21:51.426901 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': 'b14ae20f-13fb-53c3-906d-34f9f68040ad'}}) 2025-05-14 02:21:51.427595 | orchestrator | 2025-05-14 02:21:51.428591 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-14 02:21:51.432167 | orchestrator | Wednesday 14 May 2025 02:21:51 +0000 (0:00:00.188) 0:00:12.474 ********* 2025-05-14 02:21:51.584364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cb58592c-122c-52e3-870d-c9748cfaa53d'}})  2025-05-14 02:21:51.584469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b14ae20f-13fb-53c3-906d-34f9f68040ad'}})  2025-05-14 02:21:51.585811 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:51.587591 | orchestrator | 2025-05-14 02:21:51.587617 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-14 02:21:51.587624 | orchestrator | Wednesday 14 May 2025 02:21:51 +0000 (0:00:00.163) 0:00:12.637 ********* 2025-05-14 02:21:51.755943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cb58592c-122c-52e3-870d-c9748cfaa53d'}})  2025-05-14 02:21:51.756118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b14ae20f-13fb-53c3-906d-34f9f68040ad'}})  2025-05-14 02:21:51.756263 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:51.757110 | orchestrator | 2025-05-14 02:21:51.757424 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-14 02:21:51.758952 | orchestrator | Wednesday 14 May 2025 02:21:51 +0000 (0:00:00.171) 0:00:12.808 ********* 2025-05-14 02:21:51.931024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cb58592c-122c-52e3-870d-c9748cfaa53d'}})  2025-05-14 02:21:51.931930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b14ae20f-13fb-53c3-906d-34f9f68040ad'}})  2025-05-14 02:21:51.933313 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:51.936256 | orchestrator | 2025-05-14 02:21:51.936848 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-14 02:21:51.937618 | orchestrator | Wednesday 14 May 2025 02:21:51 +0000 (0:00:00.174) 0:00:12.983 ********* 2025-05-14 02:21:52.121570 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:21:52.121910 | orchestrator | 2025-05-14 02:21:52.122493 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-14 02:21:52.124751 | orchestrator | Wednesday 14 May 2025 02:21:52 +0000 (0:00:00.190) 0:00:13.173 ********* 2025-05-14 02:21:52.257194 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:21:52.257382 | orchestrator | 2025-05-14 02:21:52.259050 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-14 02:21:52.260669 | orchestrator | Wednesday 14 May 2025 02:21:52 +0000 (0:00:00.137) 0:00:13.310 ********* 2025-05-14 02:21:52.397889 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:52.398583 | orchestrator | 2025-05-14 02:21:52.401260 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-14 02:21:52.401587 | orchestrator | Wednesday 14 May 2025 02:21:52 +0000 (0:00:00.139) 0:00:13.450 ********* 2025-05-14 02:21:52.531481 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
02:21:52.531606 | orchestrator | 2025-05-14 02:21:52.532453 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-14 02:21:52.533331 | orchestrator | Wednesday 14 May 2025 02:21:52 +0000 (0:00:00.132) 0:00:13.582 ********* 2025-05-14 02:21:52.888854 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:52.889914 | orchestrator | 2025-05-14 02:21:52.890583 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-14 02:21:52.891943 | orchestrator | Wednesday 14 May 2025 02:21:52 +0000 (0:00:00.359) 0:00:13.941 ********* 2025-05-14 02:21:53.059611 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:21:53.059811 | orchestrator |  "ceph_osd_devices": { 2025-05-14 02:21:53.059938 | orchestrator |  "sdb": { 2025-05-14 02:21:53.059957 | orchestrator |  "osd_lvm_uuid": "cb58592c-122c-52e3-870d-c9748cfaa53d" 2025-05-14 02:21:53.060558 | orchestrator |  }, 2025-05-14 02:21:53.064202 | orchestrator |  "sdc": { 2025-05-14 02:21:53.064351 | orchestrator |  "osd_lvm_uuid": "b14ae20f-13fb-53c3-906d-34f9f68040ad" 2025-05-14 02:21:53.064627 | orchestrator |  } 2025-05-14 02:21:53.065005 | orchestrator |  } 2025-05-14 02:21:53.065059 | orchestrator | } 2025-05-14 02:21:53.065481 | orchestrator | 2025-05-14 02:21:53.065861 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-14 02:21:53.066194 | orchestrator | Wednesday 14 May 2025 02:21:53 +0000 (0:00:00.170) 0:00:14.111 ********* 2025-05-14 02:21:53.239046 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:53.241920 | orchestrator | 2025-05-14 02:21:53.242184 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-14 02:21:53.242716 | orchestrator | Wednesday 14 May 2025 02:21:53 +0000 (0:00:00.176) 0:00:14.288 ********* 2025-05-14 02:21:53.417861 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:53.418725 | orchestrator | 2025-05-14 02:21:53.420609 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-14 02:21:53.425840 | orchestrator | Wednesday 14 May 2025 02:21:53 +0000 (0:00:00.179) 0:00:14.467 ********* 2025-05-14 02:21:53.585644 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:21:53.585803 | orchestrator | 2025-05-14 02:21:53.586240 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-14 02:21:53.587439 | orchestrator | Wednesday 14 May 2025 02:21:53 +0000 (0:00:00.165) 0:00:14.633 ********* 2025-05-14 02:21:54.025032 | orchestrator | changed: [testbed-node-3] => { 2025-05-14 02:21:54.025154 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-14 02:21:54.026069 | orchestrator |  "ceph_osd_devices": { 2025-05-14 02:21:54.026440 | orchestrator |  "sdb": { 2025-05-14 02:21:54.026880 | orchestrator |  "osd_lvm_uuid": "cb58592c-122c-52e3-870d-c9748cfaa53d" 2025-05-14 02:21:54.029847 | orchestrator |  }, 2025-05-14 02:21:54.031400 | orchestrator |  "sdc": { 2025-05-14 02:21:54.031818 | orchestrator |  "osd_lvm_uuid": "b14ae20f-13fb-53c3-906d-34f9f68040ad" 2025-05-14 02:21:54.032467 | orchestrator |  } 2025-05-14 02:21:54.033745 | orchestrator |  }, 2025-05-14 02:21:54.033960 | orchestrator |  "lvm_volumes": [ 2025-05-14 02:21:54.034300 | orchestrator |  { 2025-05-14 02:21:54.034707 | orchestrator |  "data": "osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d", 2025-05-14 02:21:54.036298 | orchestrator |  
"data_vg": "ceph-cb58592c-122c-52e3-870d-c9748cfaa53d" 2025-05-14 02:21:54.036967 | orchestrator |  }, 2025-05-14 02:21:54.037066 | orchestrator |  { 2025-05-14 02:21:54.038124 | orchestrator |  "data": "osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad", 2025-05-14 02:21:54.038450 | orchestrator |  "data_vg": "ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad" 2025-05-14 02:21:54.039820 | orchestrator |  } 2025-05-14 02:21:54.040139 | orchestrator |  ] 2025-05-14 02:21:54.040913 | orchestrator |  } 2025-05-14 02:21:54.043704 | orchestrator | } 2025-05-14 02:21:54.044887 | orchestrator | 2025-05-14 02:21:54.046437 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-14 02:21:54.046916 | orchestrator | Wednesday 14 May 2025 02:21:54 +0000 (0:00:00.439) 0:00:15.073 ********* 2025-05-14 02:21:56.303014 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 02:21:56.303131 | orchestrator | 2025-05-14 02:21:56.303835 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-14 02:21:56.308144 | orchestrator | 2025-05-14 02:21:56.311024 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 02:21:56.311828 | orchestrator | Wednesday 14 May 2025 02:21:56 +0000 (0:00:02.279) 0:00:17.353 ********* 2025-05-14 02:21:56.615284 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-14 02:21:56.615914 | orchestrator | 2025-05-14 02:21:56.617579 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 02:21:56.618354 | orchestrator | Wednesday 14 May 2025 02:21:56 +0000 (0:00:00.313) 0:00:17.667 ********* 2025-05-14 02:21:56.886773 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:21:56.890276 | orchestrator | 2025-05-14 02:21:56.890405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:56.891425 | orchestrator | Wednesday 14 May 2025 02:21:56 +0000 (0:00:00.270) 0:00:17.937 ********* 2025-05-14 02:21:57.468765 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-14 02:21:57.469218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-14 02:21:57.470983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-14 02:21:57.471548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-14 02:21:57.474210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-14 02:21:57.474603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-14 02:21:57.476351 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-14 02:21:57.476957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-14 02:21:57.477087 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-14 02:21:57.477668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-14 02:21:57.477918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-14 02:21:57.478336 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-14 02:21:57.480925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-14 02:21:57.481019 | orchestrator | 2025-05-14 02:21:57.481036 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:57.481049 | orchestrator | Wednesday 14 May 2025 02:21:57 +0000 (0:00:00.585) 0:00:18.522 ********* 2025-05-14 02:21:57.790466 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:21:57.790556 | orchestrator | 2025-05-14 02:21:57.790610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:57.794006 | orchestrator | Wednesday 14 May 2025 02:21:57 +0000 (0:00:00.320) 0:00:18.843 ********* 2025-05-14 02:21:58.051189 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:21:58.051392 | orchestrator | 2025-05-14 02:21:58.051816 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:58.052790 | orchestrator | Wednesday 14 May 2025 02:21:58 +0000 (0:00:00.261) 0:00:19.105 ********* 2025-05-14 02:21:58.267144 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:21:58.267242 | orchestrator | 2025-05-14 02:21:58.267258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:58.267321 | orchestrator | Wednesday 14 May 2025 02:21:58 +0000 (0:00:00.211) 0:00:19.317 ********* 2025-05-14 02:21:58.892951 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:21:58.894256 | orchestrator | 2025-05-14 02:21:58.895939 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:58.897007 | orchestrator | Wednesday 14 May 2025 02:21:58 +0000 (0:00:00.624) 0:00:19.941 ********* 2025-05-14 02:21:59.117962 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:21:59.118171 | orchestrator | 2025-05-14 02:21:59.119570 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:59.119945 | orchestrator | Wednesday 14 May 2025 02:21:59 +0000 (0:00:00.220) 0:00:20.161 ********* 2025-05-14 02:21:59.325641 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:21:59.326378 | orchestrator | 2025-05-14 02:21:59.326658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:59.331463 | orchestrator | Wednesday 14 May 2025 02:21:59 +0000 (0:00:00.214) 0:00:20.376 ********* 2025-05-14 02:21:59.533470 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:21:59.533559 | orchestrator | 2025-05-14 02:21:59.534342 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:59.534369 | orchestrator | Wednesday 14 May 2025 02:21:59 +0000 (0:00:00.209) 0:00:20.586 ********* 2025-05-14 02:21:59.744738 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:21:59.747781 | orchestrator | 2025-05-14 02:21:59.747927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:21:59.750181 | orchestrator | Wednesday 14 May 2025 02:21:59 +0000 (0:00:00.211) 0:00:20.797 ********* 2025-05-14 02:22:00.177091 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d) 2025-05-14 02:22:00.177192 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d) 2025-05-14 02:22:00.177207 | orchestrator | 2025-05-14 02:22:00.177278 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:00.178342 | orchestrator | Wednesday 14 May 2025 02:22:00 +0000 (0:00:00.430) 0:00:21.227 ********* 2025-05-14 02:22:00.609165 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5f54ee85-b545-45a6-a856-bcb5a8b0ac61) 2025-05-14 02:22:00.610199 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5f54ee85-b545-45a6-a856-bcb5a8b0ac61) 2025-05-14 02:22:00.614010 | orchestrator | 2025-05-14 02:22:00.614631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:00.615962 | orchestrator | Wednesday 14 May 2025 02:22:00 +0000 (0:00:00.432) 0:00:21.660 ********* 2025-05-14 02:22:01.027036 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7ac274fd-1a92-402b-b855-ca6b0ab20cf2) 2025-05-14 02:22:01.028365 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7ac274fd-1a92-402b-b855-ca6b0ab20cf2) 2025-05-14 02:22:01.031905 | orchestrator | 2025-05-14 02:22:01.032413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:01.032958 | orchestrator | Wednesday 14 May 2025 02:22:01 +0000 (0:00:00.417) 0:00:22.078 ********* 2025-05-14 02:22:01.686248 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1d2bee4e-0e3b-437e-a6d5-c0ab15229884) 2025-05-14 02:22:01.686467 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1d2bee4e-0e3b-437e-a6d5-c0ab15229884) 2025-05-14 02:22:01.686488 | orchestrator | 2025-05-14 02:22:01.689588 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:01.689665 | orchestrator | Wednesday 14 May 2025 02:22:01 +0000 (0:00:00.661) 0:00:22.740 ********* 2025-05-14 02:22:02.273822 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 02:22:02.274906 | orchestrator | 2025-05-14 02:22:02.276122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:02.276792 | orchestrator | Wednesday 14 May 2025 02:22:02 +0000 (0:00:00.586) 0:00:23.327 ********* 2025-05-14 02:22:02.921616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-14 02:22:02.924382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-14 02:22:02.924508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-14 02:22:02.925367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-14 02:22:02.926199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-14 02:22:02.926609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-14 02:22:02.927426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-14 02:22:02.928318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-14 02:22:02.928550 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-14 02:22:02.929095 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-14 02:22:02.929616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-14 02:22:02.930181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-14 02:22:02.930453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-14 02:22:02.930969 | orchestrator | 2025-05-14 02:22:02.931494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:02.931951 | orchestrator | Wednesday 14 May 2025 02:22:02 +0000 (0:00:00.645) 0:00:23.972 ********* 2025-05-14 02:22:03.155616 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:03.155764 | orchestrator | 2025-05-14 02:22:03.155893 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:03.156170 | orchestrator | Wednesday 14 May 2025 02:22:03 +0000 (0:00:00.233) 0:00:24.206 ********* 2025-05-14 02:22:03.345880 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:03.345980 | orchestrator | 2025-05-14 02:22:03.345995 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:03.346192 | orchestrator | Wednesday 14 May 2025 02:22:03 +0000 (0:00:00.190) 0:00:24.396 ********* 2025-05-14 02:22:03.546469 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:03.547190 | orchestrator | 2025-05-14 02:22:03.547902 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:03.550820 | orchestrator | Wednesday 14 May 2025 02:22:03 +0000 (0:00:00.202) 0:00:24.598 ********* 2025-05-14 02:22:03.784988 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:03.788108 | orchestrator | 2025-05-14 02:22:03.788181 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:03.788197 | orchestrator | Wednesday 14 May 2025 02:22:03 +0000 (0:00:00.232) 0:00:24.831 ********* 2025-05-14 02:22:03.983496 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:03.984319 | orchestrator | 2025-05-14 02:22:03.986949 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:03.986974 | orchestrator | Wednesday 14 May 2025 02:22:03 +0000 (0:00:00.204) 0:00:25.035 ********* 2025-05-14 02:22:04.198385 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:04.198874 | orchestrator | 2025-05-14 02:22:04.199388 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:04.199412 | orchestrator | Wednesday 14 May 2025 02:22:04 +0000 (0:00:00.214) 0:00:25.250 ********* 2025-05-14 02:22:04.398163 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:04.399479 | orchestrator | 2025-05-14 02:22:04.399982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:04.402172 | orchestrator | Wednesday 14 May 2025 02:22:04 +0000 (0:00:00.200) 0:00:25.450 ********* 2025-05-14 02:22:04.602535 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:04.602884 | orchestrator | 2025-05-14 02:22:04.603856 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-14 02:22:04.606446 | orchestrator | Wednesday 14 May 2025 02:22:04 +0000 (0:00:00.204) 0:00:25.655 ********* 2025-05-14 02:22:05.614607 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-14 02:22:05.615363 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-14 02:22:05.617727 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-14 02:22:05.618435 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-14 02:22:05.618853 | orchestrator | 2025-05-14 02:22:05.619350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:05.619583 | orchestrator | Wednesday 14 May 2025 02:22:05 +0000 (0:00:01.010) 0:00:26.666 ********* 2025-05-14 02:22:05.819896 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:05.820851 | orchestrator | 2025-05-14 02:22:05.822337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:05.823373 | orchestrator | Wednesday 14 May 2025 02:22:05 +0000 (0:00:00.205) 0:00:26.871 ********* 2025-05-14 02:22:06.024307 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:06.025201 | orchestrator | 2025-05-14 02:22:06.025994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:06.026894 | orchestrator | Wednesday 14 May 2025 02:22:06 +0000 (0:00:00.206) 0:00:27.077 ********* 2025-05-14 02:22:06.211425 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:06.211979 | orchestrator | 2025-05-14 02:22:06.212003 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:06.212542 | orchestrator | Wednesday 14 May 2025 02:22:06 +0000 (0:00:00.185) 0:00:27.263 ********* 2025-05-14 02:22:06.426098 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:06.426216 | orchestrator | 2025-05-14 02:22:06.426826 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-14 02:22:06.428598 | orchestrator | Wednesday 14 May 2025 02:22:06 +0000 (0:00:00.215) 0:00:27.478 ********* 2025-05-14 02:22:06.590423 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-14 02:22:06.592644 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-14 02:22:06.593833 | orchestrator | 2025-05-14 02:22:06.595195 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-14 02:22:06.596555 | orchestrator | Wednesday 14 May 2025 02:22:06 +0000 (0:00:00.165) 0:00:27.644 ********* 2025-05-14 02:22:06.724339 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:06.725208 | orchestrator | 2025-05-14 02:22:06.725781 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-14 02:22:06.726106 | orchestrator | Wednesday 14 May 2025 02:22:06 +0000 (0:00:00.132) 0:00:27.777 ********* 2025-05-14 02:22:06.830293 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:06.832034 | orchestrator | 2025-05-14 02:22:06.832550 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-14 02:22:06.833101 | orchestrator | Wednesday 14 May 2025 02:22:06 +0000 (0:00:00.106) 0:00:27.883 ********* 2025-05-14 02:22:06.937117 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:06.937223 | orchestrator | 2025-05-14 
02:22:06.937341 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-14 02:22:06.937872 | orchestrator | Wednesday 14 May 2025 02:22:06 +0000 (0:00:00.107) 0:00:27.991 ********* 2025-05-14 02:22:07.062621 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:22:07.063421 | orchestrator | 2025-05-14 02:22:07.065353 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-14 02:22:07.066197 | orchestrator | Wednesday 14 May 2025 02:22:07 +0000 (0:00:00.123) 0:00:28.115 ********* 2025-05-14 02:22:07.262565 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '22852bcc-228b-503b-9f2d-d63325c20b67'}}) 2025-05-14 02:22:07.263358 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fc7bdc9b-bbf6-5512-af7e-0ab125570579'}}) 2025-05-14 02:22:07.264389 | orchestrator | 2025-05-14 02:22:07.265179 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-14 02:22:07.265977 | orchestrator | Wednesday 14 May 2025 02:22:07 +0000 (0:00:00.200) 0:00:28.316 ********* 2025-05-14 02:22:07.588361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '22852bcc-228b-503b-9f2d-d63325c20b67'}})  2025-05-14 02:22:07.592537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fc7bdc9b-bbf6-5512-af7e-0ab125570579'}})  2025-05-14 02:22:07.592575 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:07.592830 | orchestrator | 2025-05-14 02:22:07.593745 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-14 02:22:07.594523 | orchestrator | Wednesday 14 May 2025 02:22:07 +0000 (0:00:00.325) 0:00:28.641 ********* 2025-05-14 02:22:07.750395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '22852bcc-228b-503b-9f2d-d63325c20b67'}})  2025-05-14 02:22:07.750999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fc7bdc9b-bbf6-5512-af7e-0ab125570579'}})  2025-05-14 02:22:07.751832 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:07.752470 | orchestrator | 2025-05-14 02:22:07.753099 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-14 02:22:07.753924 | orchestrator | Wednesday 14 May 2025 02:22:07 +0000 (0:00:00.161) 0:00:28.803 ********* 2025-05-14 02:22:07.894995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '22852bcc-228b-503b-9f2d-d63325c20b67'}})  2025-05-14 02:22:07.897798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fc7bdc9b-bbf6-5512-af7e-0ab125570579'}})  2025-05-14 02:22:07.898356 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:07.899019 | orchestrator | 2025-05-14 02:22:07.899677 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-14 02:22:07.900425 | orchestrator | Wednesday 14 May 2025 02:22:07 +0000 (0:00:00.144) 0:00:28.948 ********* 2025-05-14 02:22:08.033257 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:22:08.037558 | orchestrator | 2025-05-14 02:22:08.040462 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-14 02:22:08.042002 | orchestrator | Wednesday 14 May 2025 02:22:08 +0000 
(0:00:00.135) 0:00:29.083 ********* 2025-05-14 02:22:08.168339 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:22:08.169085 | orchestrator | 2025-05-14 02:22:08.170370 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-14 02:22:08.171099 | orchestrator | Wednesday 14 May 2025 02:22:08 +0000 (0:00:00.136) 0:00:29.220 ********* 2025-05-14 02:22:08.295850 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:08.296592 | orchestrator | 2025-05-14 02:22:08.296898 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-14 02:22:08.297551 | orchestrator | Wednesday 14 May 2025 02:22:08 +0000 (0:00:00.129) 0:00:29.349 ********* 2025-05-14 02:22:08.419465 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:08.419853 | orchestrator | 2025-05-14 02:22:08.420769 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-14 02:22:08.422481 | orchestrator | Wednesday 14 May 2025 02:22:08 +0000 (0:00:00.124) 0:00:29.473 ********* 2025-05-14 02:22:08.553217 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:08.553403 | orchestrator | 2025-05-14 02:22:08.554251 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-14 02:22:08.554826 | orchestrator | Wednesday 14 May 2025 02:22:08 +0000 (0:00:00.132) 0:00:29.606 ********* 2025-05-14 02:22:08.687234 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:22:08.688269 | orchestrator |  "ceph_osd_devices": { 2025-05-14 02:22:08.689003 | orchestrator |  "sdb": { 2025-05-14 02:22:08.689877 | orchestrator |  "osd_lvm_uuid": "22852bcc-228b-503b-9f2d-d63325c20b67" 2025-05-14 02:22:08.690261 | orchestrator |  }, 2025-05-14 02:22:08.691877 | orchestrator |  "sdc": { 2025-05-14 02:22:08.692208 | orchestrator |  "osd_lvm_uuid": "fc7bdc9b-bbf6-5512-af7e-0ab125570579" 2025-05-14 02:22:08.693143 | orchestrator |  } 2025-05-14 02:22:08.693913 | orchestrator |  } 2025-05-14 02:22:08.693998 | orchestrator | } 2025-05-14 02:22:08.694882 | orchestrator | 2025-05-14 02:22:08.695754 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-14 02:22:08.696097 | orchestrator | Wednesday 14 May 2025 02:22:08 +0000 (0:00:00.133) 0:00:29.740 ********* 2025-05-14 02:22:08.838866 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:08.839885 | orchestrator | 2025-05-14 02:22:08.840676 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-14 02:22:08.841750 | orchestrator | Wednesday 14 May 2025 02:22:08 +0000 (0:00:00.151) 0:00:29.892 ********* 2025-05-14 02:22:08.979395 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:08.980850 | orchestrator | 2025-05-14 02:22:08.981986 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-14 02:22:08.984583 | orchestrator | Wednesday 14 May 2025 02:22:08 +0000 (0:00:00.140) 0:00:30.032 ********* 2025-05-14 02:22:09.126535 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:22:09.126646 | orchestrator | 2025-05-14 02:22:09.130660 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-14 02:22:09.134854 | orchestrator | Wednesday 14 May 2025 02:22:09 +0000 (0:00:00.147) 0:00:30.179 ********* 2025-05-14 02:22:09.596658 | orchestrator | changed: [testbed-node-4] => { 2025-05-14 02:22:09.597786 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-14 02:22:09.598819 | orchestrator |  "ceph_osd_devices": { 2025-05-14 02:22:09.600101 | orchestrator |  "sdb": { 2025-05-14 02:22:09.601455 | orchestrator |  "osd_lvm_uuid": "22852bcc-228b-503b-9f2d-d63325c20b67" 2025-05-14 02:22:09.602586 | orchestrator |  }, 2025-05-14 02:22:09.602800 | orchestrator |  "sdc": { 2025-05-14 02:22:09.603653 | orchestrator |  "osd_lvm_uuid": "fc7bdc9b-bbf6-5512-af7e-0ab125570579" 2025-05-14 02:22:09.604412 | orchestrator |  } 2025-05-14 02:22:09.605201 | orchestrator |  }, 2025-05-14 02:22:09.605980 | orchestrator |  "lvm_volumes": [ 2025-05-14 02:22:09.606626 | orchestrator |  { 2025-05-14 02:22:09.607237 | orchestrator |  "data": "osd-block-22852bcc-228b-503b-9f2d-d63325c20b67", 2025-05-14 02:22:09.608136 | orchestrator |  "data_vg": "ceph-22852bcc-228b-503b-9f2d-d63325c20b67" 2025-05-14 02:22:09.608960 | orchestrator |  }, 2025-05-14 02:22:09.611300 | orchestrator |  { 2025-05-14 02:22:09.611870 | orchestrator |  "data": "osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579", 2025-05-14 02:22:09.612157 | orchestrator |  "data_vg": "ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579" 2025-05-14 02:22:09.612799 | orchestrator |  } 2025-05-14 02:22:09.614454 | orchestrator |  ] 2025-05-14 02:22:09.615494 | orchestrator |  } 2025-05-14 02:22:09.615595 | orchestrator | } 2025-05-14 02:22:09.616309 | orchestrator | 2025-05-14 02:22:09.616980 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-14 02:22:09.618280 | orchestrator | Wednesday 14 May 2025 02:22:09 +0000 (0:00:00.469) 0:00:30.649 ********* 2025-05-14 02:22:11.038497 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-14 02:22:11.038943 | orchestrator | 2025-05-14 02:22:11.041546 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-14 02:22:11.042057 | orchestrator | 2025-05-14 02:22:11.042960 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 02:22:11.044247 | orchestrator | Wednesday 14 May 2025 02:22:11 +0000 (0:00:01.440) 0:00:32.089 ********* 2025-05-14 02:22:11.273169 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-14 02:22:11.273960 | orchestrator | 2025-05-14 02:22:11.275198 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 02:22:11.276062 | orchestrator | Wednesday 14 May 2025 02:22:11 +0000 (0:00:00.235) 0:00:32.325 ********* 2025-05-14 02:22:11.904457 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:22:11.905433 | orchestrator | 2025-05-14 02:22:11.906712 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:11.907589 | orchestrator | Wednesday 14 May 2025 02:22:11 +0000 (0:00:00.631) 0:00:32.956 ********* 2025-05-14 02:22:12.329865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-14 02:22:12.330915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-14 02:22:12.332623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-14 02:22:12.333749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-14 02:22:12.334502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-05-14 02:22:12.335823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-14 02:22:12.336680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-14 02:22:12.337399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-14 02:22:12.338218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-14 02:22:12.339228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-14 02:22:12.339381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-14 02:22:12.340103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-14 02:22:12.340941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-14 02:22:12.341398 | orchestrator | 2025-05-14 02:22:12.342569 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:12.343870 | orchestrator | Wednesday 14 May 2025 02:22:12 +0000 (0:00:00.426) 0:00:33.382 ********* 2025-05-14 02:22:12.554534 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:12.555135 | orchestrator | 2025-05-14 02:22:12.555839 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:12.558819 | orchestrator | Wednesday 14 May 2025 02:22:12 +0000 (0:00:00.224) 0:00:33.606 ********* 2025-05-14 02:22:12.767745 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:12.769996 | orchestrator | 2025-05-14 02:22:12.771993 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:12.772952 | orchestrator | Wednesday 14 May 2025 02:22:12 +0000 (0:00:00.213) 0:00:33.820 ********* 2025-05-14 02:22:12.983900 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:12.984085 | orchestrator | 2025-05-14 02:22:12.984947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:12.985055 | orchestrator | Wednesday 14 May 2025 02:22:12 +0000 (0:00:00.215) 0:00:34.036 ********* 2025-05-14 02:22:13.192271 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:13.193292 | orchestrator | 2025-05-14 02:22:13.194159 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:13.197002 | orchestrator | Wednesday 14 May 2025 02:22:13 +0000 (0:00:00.208) 0:00:34.244 ********* 2025-05-14 02:22:13.392583 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:13.392796 | orchestrator | 2025-05-14 02:22:13.392907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:13.393385 | orchestrator | Wednesday 14 May 2025 02:22:13 +0000 (0:00:00.200) 0:00:34.445 ********* 2025-05-14 02:22:13.627989 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:13.628085 | orchestrator | 2025-05-14 02:22:13.628890 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:13.629849 | orchestrator | Wednesday 14 May 2025 02:22:13 +0000 (0:00:00.234) 0:00:34.680 ********* 2025-05-14 02:22:13.856402 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:13.856568 
| orchestrator | 2025-05-14 02:22:13.856584 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:13.856662 | orchestrator | Wednesday 14 May 2025 02:22:13 +0000 (0:00:00.226) 0:00:34.907 ********* 2025-05-14 02:22:14.053035 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:14.053509 | orchestrator | 2025-05-14 02:22:14.054618 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:14.055361 | orchestrator | Wednesday 14 May 2025 02:22:14 +0000 (0:00:00.199) 0:00:35.106 ********* 2025-05-14 02:22:14.904853 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d) 2025-05-14 02:22:14.905858 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d) 2025-05-14 02:22:14.908807 | orchestrator | 2025-05-14 02:22:14.910184 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:14.910975 | orchestrator | Wednesday 14 May 2025 02:22:14 +0000 (0:00:00.849) 0:00:35.956 ********* 2025-05-14 02:22:15.322106 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dfedfdfd-f02f-46ee-b152-0d1db465af93) 2025-05-14 02:22:15.323077 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dfedfdfd-f02f-46ee-b152-0d1db465af93) 2025-05-14 02:22:15.323812 | orchestrator | 2025-05-14 02:22:15.324470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:15.325280 | orchestrator | Wednesday 14 May 2025 02:22:15 +0000 (0:00:00.418) 0:00:36.374 ********* 2025-05-14 02:22:15.769933 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b728a659-cffd-44e0-b567-754457aa92dd) 2025-05-14 02:22:15.770505 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b728a659-cffd-44e0-b567-754457aa92dd) 2025-05-14 02:22:15.771930 | orchestrator | 2025-05-14 02:22:15.772784 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:15.773412 | orchestrator | Wednesday 14 May 2025 02:22:15 +0000 (0:00:00.447) 0:00:36.822 ********* 2025-05-14 02:22:16.203959 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0315b34d-7399-4bf5-aad0-c6c82dbe1c9e) 2025-05-14 02:22:16.204954 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0315b34d-7399-4bf5-aad0-c6c82dbe1c9e) 2025-05-14 02:22:16.206993 | orchestrator | 2025-05-14 02:22:16.207082 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:22:16.207411 | orchestrator | Wednesday 14 May 2025 02:22:16 +0000 (0:00:00.432) 0:00:37.255 ********* 2025-05-14 02:22:16.552487 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 02:22:16.553664 | orchestrator | 2025-05-14 02:22:16.554163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:16.555322 | orchestrator | Wednesday 14 May 2025 02:22:16 +0000 (0:00:00.349) 0:00:37.604 ********* 2025-05-14 02:22:16.982299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-14 02:22:16.982737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-14 02:22:16.983891 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-14 02:22:16.986306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-14 02:22:16.986335 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-14 02:22:16.986347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-14 02:22:16.986404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-14 02:22:16.987333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-14 02:22:16.988380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-14 02:22:16.989287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-14 02:22:16.990225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-14 02:22:16.990253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-14 02:22:16.990538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-14 02:22:16.990852 | orchestrator | 2025-05-14 02:22:16.991326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:16.991455 | orchestrator | Wednesday 14 May 2025 02:22:16 +0000 (0:00:00.429) 0:00:38.033 ********* 2025-05-14 02:22:17.193366 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:17.193730 | orchestrator | 2025-05-14 02:22:17.194226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:17.195010 | orchestrator | Wednesday 14 May 2025 02:22:17 +0000 (0:00:00.212) 0:00:38.246 ********* 2025-05-14 02:22:17.393327 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:17.394811 | orchestrator | 2025-05-14 02:22:17.395852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:17.397052 | orchestrator | Wednesday 14 May 2025 02:22:17 +0000 (0:00:00.199) 0:00:38.445 ********* 2025-05-14 02:22:17.602453 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:17.603860 | orchestrator | 2025-05-14 02:22:17.604236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:17.605079 | orchestrator | Wednesday 14 May 2025 02:22:17 +0000 (0:00:00.209) 0:00:38.654 ********* 2025-05-14 02:22:18.255758 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:18.255954 | orchestrator | 2025-05-14 02:22:18.257497 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:18.259991 | orchestrator | Wednesday 14 May 2025 02:22:18 +0000 (0:00:00.652) 0:00:39.307 ********* 2025-05-14 02:22:18.462995 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:18.463973 | orchestrator | 2025-05-14 02:22:18.464771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:18.465745 | orchestrator | Wednesday 14 May 2025 02:22:18 +0000 (0:00:00.207) 0:00:39.515 ********* 2025-05-14 02:22:18.673341 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
02:22:18.673428 | orchestrator | 2025-05-14 02:22:18.674229 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:18.675073 | orchestrator | Wednesday 14 May 2025 02:22:18 +0000 (0:00:00.210) 0:00:39.726 ********* 2025-05-14 02:22:18.894887 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:18.896434 | orchestrator | 2025-05-14 02:22:18.896800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:18.898795 | orchestrator | Wednesday 14 May 2025 02:22:18 +0000 (0:00:00.221) 0:00:39.947 ********* 2025-05-14 02:22:19.100254 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:19.100465 | orchestrator | 2025-05-14 02:22:19.100871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:19.101266 | orchestrator | Wednesday 14 May 2025 02:22:19 +0000 (0:00:00.206) 0:00:40.153 ********* 2025-05-14 02:22:19.742528 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-14 02:22:19.743540 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-14 02:22:19.747216 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-14 02:22:19.747242 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-14 02:22:19.747255 | orchestrator | 2025-05-14 02:22:19.748662 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:19.748874 | orchestrator | Wednesday 14 May 2025 02:22:19 +0000 (0:00:00.641) 0:00:40.795 ********* 2025-05-14 02:22:19.946149 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:19.946889 | orchestrator | 2025-05-14 02:22:19.947945 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:19.949955 | orchestrator | Wednesday 14 May 2025 02:22:19 +0000 (0:00:00.202) 0:00:40.998 ********* 2025-05-14 02:22:20.161954 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:20.162410 | orchestrator | 2025-05-14 02:22:20.162441 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:20.162890 | orchestrator | Wednesday 14 May 2025 02:22:20 +0000 (0:00:00.217) 0:00:41.215 ********* 2025-05-14 02:22:20.371078 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:20.371179 | orchestrator | 2025-05-14 02:22:20.371194 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:22:20.371403 | orchestrator | Wednesday 14 May 2025 02:22:20 +0000 (0:00:00.208) 0:00:41.423 ********* 2025-05-14 02:22:20.613659 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:20.613842 | orchestrator | 2025-05-14 02:22:20.613924 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-14 02:22:20.614359 | orchestrator | Wednesday 14 May 2025 02:22:20 +0000 (0:00:00.243) 0:00:41.666 ********* 2025-05-14 02:22:20.793890 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-14 02:22:20.793982 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-14 02:22:20.794173 | orchestrator | 2025-05-14 02:22:20.794784 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-14 02:22:20.794833 | orchestrator | Wednesday 14 May 2025 02:22:20 +0000 (0:00:00.180) 0:00:41.847 ********* 2025-05-14 02:22:21.166219 | 
orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:21.166380 | orchestrator | 2025-05-14 02:22:21.167212 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-14 02:22:21.168010 | orchestrator | Wednesday 14 May 2025 02:22:21 +0000 (0:00:00.370) 0:00:42.218 ********* 2025-05-14 02:22:21.306777 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:21.306986 | orchestrator | 2025-05-14 02:22:21.307690 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-14 02:22:21.308779 | orchestrator | Wednesday 14 May 2025 02:22:21 +0000 (0:00:00.141) 0:00:42.359 ********* 2025-05-14 02:22:21.444906 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:21.445258 | orchestrator | 2025-05-14 02:22:21.446430 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-14 02:22:21.447261 | orchestrator | Wednesday 14 May 2025 02:22:21 +0000 (0:00:00.138) 0:00:42.497 ********* 2025-05-14 02:22:21.609896 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:22:21.610138 | orchestrator | 2025-05-14 02:22:21.611601 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-14 02:22:21.612336 | orchestrator | Wednesday 14 May 2025 02:22:21 +0000 (0:00:00.164) 0:00:42.662 ********* 2025-05-14 02:22:21.825637 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4aa0a295-50da-5a6e-9e1c-976797741e16'}}) 2025-05-14 02:22:21.826394 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '19540cc4-3279-5090-817a-02eeffb19a16'}}) 2025-05-14 02:22:21.826429 | orchestrator | 2025-05-14 02:22:21.826444 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-14 02:22:21.826531 | orchestrator | Wednesday 14 May 2025 02:22:21 +0000 (0:00:00.216) 0:00:42.878 ********* 2025-05-14 02:22:21.983544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4aa0a295-50da-5a6e-9e1c-976797741e16'}})  2025-05-14 02:22:21.983794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '19540cc4-3279-5090-817a-02eeffb19a16'}})  2025-05-14 02:22:21.984558 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:21.985531 | orchestrator | 2025-05-14 02:22:21.985988 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-14 02:22:21.986383 | orchestrator | Wednesday 14 May 2025 02:22:21 +0000 (0:00:00.156) 0:00:43.035 ********* 2025-05-14 02:22:22.146265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4aa0a295-50da-5a6e-9e1c-976797741e16'}})  2025-05-14 02:22:22.146479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '19540cc4-3279-5090-817a-02eeffb19a16'}})  2025-05-14 02:22:22.147279 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:22:22.148060 | orchestrator | 2025-05-14 02:22:22.149033 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-14 02:22:22.149473 | orchestrator | Wednesday 14 May 2025 02:22:22 +0000 (0:00:00.164) 0:00:43.200 ********* 2025-05-14 02:22:22.328223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4aa0a295-50da-5a6e-9e1c-976797741e16'}})  2025-05-14 02:22:22.329138 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '19540cc4-3279-5090-817a-02eeffb19a16'}})
2025-05-14 02:22:22.332127 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:22:22.332152 | orchestrator |
2025-05-14 02:22:22.332485 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-14 02:22:22.333472 | orchestrator | Wednesday 14 May 2025 02:22:22 +0000 (0:00:00.179) 0:00:43.380 *********
2025-05-14 02:22:22.452392 | orchestrator | ok: [testbed-node-5]
2025-05-14 02:22:22.453659 | orchestrator |
2025-05-14 02:22:22.454441 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-14 02:22:22.455611 | orchestrator | Wednesday 14 May 2025 02:22:22 +0000 (0:00:00.124) 0:00:43.505 *********
2025-05-14 02:22:22.602198 | orchestrator | ok: [testbed-node-5]
2025-05-14 02:22:22.602786 | orchestrator |
2025-05-14 02:22:22.604427 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-14 02:22:22.605381 | orchestrator | Wednesday 14 May 2025 02:22:22 +0000 (0:00:00.149) 0:00:43.654 *********
2025-05-14 02:22:22.745331 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:22:22.746717 | orchestrator |
2025-05-14 02:22:22.747196 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-14 02:22:22.748875 | orchestrator | Wednesday 14 May 2025 02:22:22 +0000 (0:00:00.144) 0:00:43.798 *********
2025-05-14 02:22:23.103543 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:22:23.107174 | orchestrator |
2025-05-14 02:22:23.107262 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-14 02:22:23.107745 | orchestrator | Wednesday 14 May 2025 02:22:23 +0000 (0:00:00.355) 0:00:44.153 *********
2025-05-14 02:22:23.224991 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:22:23.225763 | orchestrator |
2025-05-14 02:22:23.226598 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-14 02:22:23.228975 | orchestrator | Wednesday 14 May 2025 02:22:23 +0000 (0:00:00.123) 0:00:44.277 *********
2025-05-14 02:22:23.378971 | orchestrator | ok: [testbed-node-5] => {
2025-05-14 02:22:23.379263 | orchestrator |  "ceph_osd_devices": {
2025-05-14 02:22:23.379635 | orchestrator |  "sdb": {
2025-05-14 02:22:23.380877 | orchestrator |  "osd_lvm_uuid": "4aa0a295-50da-5a6e-9e1c-976797741e16"
2025-05-14 02:22:23.381456 | orchestrator |  },
2025-05-14 02:22:23.381621 | orchestrator |  "sdc": {
2025-05-14 02:22:23.381814 | orchestrator |  "osd_lvm_uuid": "19540cc4-3279-5090-817a-02eeffb19a16"
2025-05-14 02:22:23.382108 | orchestrator |  }
2025-05-14 02:22:23.382600 | orchestrator |  }
2025-05-14 02:22:23.383054 | orchestrator | }
2025-05-14 02:22:23.383134 | orchestrator |
2025-05-14 02:22:23.385850 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-14 02:22:23.385966 | orchestrator | Wednesday 14 May 2025 02:22:23 +0000 (0:00:00.155) 0:00:44.432 *********
2025-05-14 02:22:23.516124 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:22:23.516324 | orchestrator |
2025-05-14 02:22:23.517433 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-14 02:22:23.520333 | orchestrator | Wednesday 14 May 2025 02:22:23 +0000 (0:00:00.131) 0:00:44.564 *********
2025-05-14 02:22:23.644936 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:22:23.645730 | orchestrator |
2025-05-14 02:22:23.647275 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-14 02:22:23.648002 | orchestrator | Wednesday 14 May 2025 02:22:23 +0000 (0:00:00.132) 0:00:44.697 *********
2025-05-14 02:22:23.786817 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:22:23.788429 | orchestrator |
2025-05-14 02:22:23.789298 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-14 02:22:23.790606 | orchestrator | Wednesday 14 May 2025 02:22:23 +0000 (0:00:00.142) 0:00:44.839 *********
2025-05-14 02:22:24.087375 | orchestrator | changed: [testbed-node-5] => {
2025-05-14 02:22:24.087486 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-14 02:22:24.087505 | orchestrator |  "ceph_osd_devices": {
2025-05-14 02:22:24.087981 | orchestrator |  "sdb": {
2025-05-14 02:22:24.088660 | orchestrator |  "osd_lvm_uuid": "4aa0a295-50da-5a6e-9e1c-976797741e16"
2025-05-14 02:22:24.089189 | orchestrator |  },
2025-05-14 02:22:24.089312 | orchestrator |  "sdc": {
2025-05-14 02:22:24.089720 | orchestrator |  "osd_lvm_uuid": "19540cc4-3279-5090-817a-02eeffb19a16"
2025-05-14 02:22:24.090013 | orchestrator |  }
2025-05-14 02:22:24.090645 | orchestrator |  },
2025-05-14 02:22:24.091588 | orchestrator |  "lvm_volumes": [
2025-05-14 02:22:24.091643 | orchestrator |  {
2025-05-14 02:22:24.091813 | orchestrator |  "data": "osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16",
2025-05-14 02:22:24.092273 | orchestrator |  "data_vg": "ceph-4aa0a295-50da-5a6e-9e1c-976797741e16"
2025-05-14 02:22:24.092512 | orchestrator |  },
2025-05-14 02:22:24.093057 | orchestrator |  {
2025-05-14 02:22:24.093427 | orchestrator |  "data": "osd-block-19540cc4-3279-5090-817a-02eeffb19a16",
2025-05-14 02:22:24.093861 | orchestrator |  "data_vg": "ceph-19540cc4-3279-5090-817a-02eeffb19a16"
2025-05-14 02:22:24.094259 | orchestrator |  }
2025-05-14 02:22:24.094819 | orchestrator |  ]
2025-05-14 02:22:24.095455 | orchestrator |  }
2025-05-14 02:22:24.095558 | orchestrator | }
2025-05-14 02:22:24.097086 | orchestrator |
2025-05-14 02:22:24.097166 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-14 02:22:24.097192 | orchestrator | Wednesday 14 May 2025 02:22:24 +0000 (0:00:00.300) 0:00:45.139 *********
2025-05-14 02:22:25.253529 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-14 02:22:25.253745 | orchestrator |
2025-05-14 02:22:25.255061 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 02:22:25.255418 | orchestrator | 2025-05-14 02:22:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-14 02:22:25.255617 | orchestrator | 2025-05-14 02:22:25 | INFO  | Please wait and do not abort execution.
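The lvm_volumes list printed above is derived mechanically from ceph_osd_devices: each osd_lvm_uuid becomes one entry whose data LV is named osd-block-<uuid> and whose data_vg is ceph-<uuid>. A minimal sketch of an equivalent task (an illustration only, not the actual OSISM role code; it assumes plain block-only OSDs without separate DB/WAL devices):

# Sketch: build lvm_volumes from ceph_osd_devices, one entry per device.
- name: Compile lvm_volumes (sketch)
  ansible.builtin.set_fact:
    lvm_volumes: >-
      {{ lvm_volumes | default([]) + [{
           'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
           'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid
         }] }}
  loop: "{{ ceph_osd_devices | dict2items }}"

Applied to the two devices reported for testbed-node-5 (sdb and sdc), this yields exactly the two-entry lvm_volumes list shown in _ceph_configure_lvm_config_data below.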
2025-05-14 02:22:25.257798 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-14 02:22:25.258509 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-14 02:22:25.259258 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-14 02:22:25.260230 | orchestrator |
2025-05-14 02:22:25.260745 | orchestrator |
2025-05-14 02:22:25.261425 | orchestrator |
2025-05-14 02:22:25.262447 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:22:25.263200 | orchestrator | Wednesday 14 May 2025 02:22:25 +0000 (0:00:01.164) 0:00:46.303 *********
2025-05-14 02:22:25.263742 | orchestrator | ===============================================================================
2025-05-14 02:22:25.264153 | orchestrator | Write configuration file ------------------------------------------------ 4.88s
2025-05-14 02:22:25.264362 | orchestrator | Add known partitions to the list of available block devices ------------- 1.61s
2025-05-14 02:22:25.264915 | orchestrator | Add known links to the list of available block devices ------------------ 1.53s
2025-05-14 02:22:25.265176 | orchestrator | Add known partitions to the list of available block devices ------------- 1.30s
2025-05-14 02:22:25.265585 | orchestrator | Print configuration data ------------------------------------------------ 1.21s
2025-05-14 02:22:25.266151 | orchestrator | Get initial list of available block devices ----------------------------- 1.19s
2025-05-14 02:22:25.266780 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s
2025-05-14 02:22:25.267237 | orchestrator | Add known links to the list of available block devices ------------------ 0.94s
2025-05-14 02:22:25.267567 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.90s
2025-05-14 02:22:25.268129 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s
2025-05-14 02:22:25.268650 | orchestrator | Generate WAL VG names --------------------------------------------------- 0.85s
2025-05-14 02:22:25.269125 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-05-14 02:22:25.269986 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-05-14 02:22:25.270739 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-05-14 02:22:25.271077 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.65s
2025-05-14 02:22:25.271560 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-05-14 02:22:25.272048 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-05-14 02:22:25.272351 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s
2025-05-14 02:22:25.272714 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.62s
2025-05-14 02:22:25.273454 | orchestrator | Set WAL devices config data --------------------------------------------- 0.61s
2025-05-14 02:22:37.528982 | orchestrator | 2025-05-14 02:22:37 | INFO  | Task 17a9d4bc-5488-4c6f-8b8b-74596a9ac58e is running in background. Output coming soon.
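The Write configuration file handler, which tops the recap above at 4.88s, persists the compiled data for each node on testbed-manager so later plays can pick it up (lvm_volumes is the structure the ceph-volume LVM scenario consumes). The destination path and file name are not visible in this log; purely as an illustration, the values written for testbed-node-5 would mirror the printed _ceph_configure_lvm_config_data:

# Sketch of the written per-host configuration (path/filename assumed, values from the log above).
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 4aa0a295-50da-5a6e-9e1c-976797741e16
  sdc:
    osd_lvm_uuid: 19540cc4-3279-5090-817a-02eeffb19a16
lvm_volumes:
  - data: osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16
    data_vg: ceph-4aa0a295-50da-5a6e-9e1c-976797741e16
  - data: osd-block-19540cc4-3279-5090-817a-02eeffb19a16
    data_vg: ceph-19540cc4-3279-5090-817a-02eeffb19a16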
2025-05-14 02:23:14.068283 | orchestrator | 2025-05-14 02:23:05 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-14 02:23:14.068386 | orchestrator | 2025-05-14 02:23:05 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-14 02:23:14.068411 | orchestrator | 2025-05-14 02:23:05 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-14 02:23:14.068424 | orchestrator | 2025-05-14 02:23:06 | INFO  | Handling group overwrites in 99-overwrite
2025-05-14 02:23:14.068436 | orchestrator | 2025-05-14 02:23:06 | INFO  | Removing group frr:children from 60-generic
2025-05-14 02:23:14.068447 | orchestrator | 2025-05-14 02:23:06 | INFO  | Removing group storage:children from 50-kolla
2025-05-14 02:23:14.068457 | orchestrator | 2025-05-14 02:23:06 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-14 02:23:14.068468 | orchestrator | 2025-05-14 02:23:06 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-14 02:23:14.068479 | orchestrator | 2025-05-14 02:23:06 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-14 02:23:14.068490 | orchestrator | 2025-05-14 02:23:06 | INFO  | Handling group overwrites in 20-roles
2025-05-14 02:23:14.068500 | orchestrator | 2025-05-14 02:23:06 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-14 02:23:14.068511 | orchestrator | 2025-05-14 02:23:06 | INFO  | File 20-netbox not found in /inventory.pre/
2025-05-14 02:23:14.068522 | orchestrator | 2025-05-14 02:23:13 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-05-14 02:23:15.758315 | orchestrator | 2025-05-14 02:23:15 | INFO  | Task b12f6960-b685-4386-8f19-0f30a5f4d525 (ceph-create-lvm-devices) was prepared for execution.
2025-05-14 02:23:15.758412 | orchestrator | 2025-05-14 02:23:15 | INFO  | It takes a moment until task b12f6960-b685-4386-8f19-0f30a5f4d525 (ceph-create-lvm-devices) has been started and output is visible here.
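The ceph-create-lvm-devices play whose output follows turns that configuration into actual LVM objects: one volume group per OSD data device and a single logical volume inside it, both named after the osd_lvm_uuid (compare the "Create block VGs" / "Create block LVs" tasks and the final lvm_report for testbed-node-3 further down, where /dev/sdb and /dev/sdc each back one ceph-<uuid> VG). A minimal sketch using the community.general LVM modules, under the assumption that the block LV simply takes the whole VG; this is an illustration, not the play's actual task code:

# Sketch: one VG per OSD device (e.g. /dev/sdb -> ceph-<osd_lvm_uuid>) ...
- name: Create block VGs (sketch)
  community.general.lvg:
    vg: "ceph-{{ item.value.osd_lvm_uuid }}"
    pvs: "/dev/{{ item.key }}"
  loop: "{{ ceph_osd_devices | dict2items }}"

# ... and one LV per VG, named osd-block-<osd_lvm_uuid>.
- name: Create block LVs (sketch)
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"
    size: 100%FREE      # assumption: the block LV uses all free extents in the VG
    shrink: false       # do not resize an LV that already exists on reruns
  loop: "{{ lvm_volumes }}"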
2025-05-14 02:23:18.722747 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 02:23:19.207386 | orchestrator | 2025-05-14 02:23:19.207687 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-14 02:23:19.207838 | orchestrator | 2025-05-14 02:23:19.208322 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 02:23:19.208645 | orchestrator | Wednesday 14 May 2025 02:23:19 +0000 (0:00:00.427) 0:00:00.427 ********* 2025-05-14 02:23:19.442630 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-14 02:23:19.443467 | orchestrator | 2025-05-14 02:23:19.444049 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 02:23:19.444622 | orchestrator | Wednesday 14 May 2025 02:23:19 +0000 (0:00:00.235) 0:00:00.663 ********* 2025-05-14 02:23:19.651465 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:23:19.652775 | orchestrator | 2025-05-14 02:23:19.653832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:19.654554 | orchestrator | Wednesday 14 May 2025 02:23:19 +0000 (0:00:00.209) 0:00:00.872 ********* 2025-05-14 02:23:20.264083 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-14 02:23:20.265168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-14 02:23:20.265201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-14 02:23:20.267010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-14 02:23:20.267047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-14 02:23:20.267453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-14 02:23:20.268689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-14 02:23:20.269440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-14 02:23:20.269994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-14 02:23:20.270471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-14 02:23:20.270925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-14 02:23:20.271387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-14 02:23:20.271864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-14 02:23:20.272310 | orchestrator | 2025-05-14 02:23:20.272773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:20.273267 | orchestrator | Wednesday 14 May 2025 02:23:20 +0000 (0:00:00.610) 0:00:01.483 ********* 2025-05-14 02:23:20.466240 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:20.466701 | orchestrator | 2025-05-14 02:23:20.467902 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:20.468894 | orchestrator | Wednesday 14 May 2025 02:23:20 +0000 
(0:00:00.203) 0:00:01.686 ********* 2025-05-14 02:23:20.655376 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:20.656043 | orchestrator | 2025-05-14 02:23:20.656658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:20.657435 | orchestrator | Wednesday 14 May 2025 02:23:20 +0000 (0:00:00.188) 0:00:01.875 ********* 2025-05-14 02:23:20.839443 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:20.839571 | orchestrator | 2025-05-14 02:23:20.839677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:20.839894 | orchestrator | Wednesday 14 May 2025 02:23:20 +0000 (0:00:00.183) 0:00:02.058 ********* 2025-05-14 02:23:21.031174 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:21.033179 | orchestrator | 2025-05-14 02:23:21.034833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:21.034864 | orchestrator | Wednesday 14 May 2025 02:23:21 +0000 (0:00:00.192) 0:00:02.250 ********* 2025-05-14 02:23:21.211981 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:21.212233 | orchestrator | 2025-05-14 02:23:21.213140 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:21.213756 | orchestrator | Wednesday 14 May 2025 02:23:21 +0000 (0:00:00.181) 0:00:02.432 ********* 2025-05-14 02:23:21.403020 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:21.403586 | orchestrator | 2025-05-14 02:23:21.403825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:21.404206 | orchestrator | Wednesday 14 May 2025 02:23:21 +0000 (0:00:00.191) 0:00:02.624 ********* 2025-05-14 02:23:21.595694 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:21.596934 | orchestrator | 2025-05-14 02:23:21.597462 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:21.598581 | orchestrator | Wednesday 14 May 2025 02:23:21 +0000 (0:00:00.191) 0:00:02.815 ********* 2025-05-14 02:23:21.780873 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:21.781076 | orchestrator | 2025-05-14 02:23:21.783145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:21.783170 | orchestrator | Wednesday 14 May 2025 02:23:21 +0000 (0:00:00.186) 0:00:03.001 ********* 2025-05-14 02:23:22.292056 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314) 2025-05-14 02:23:22.292916 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314) 2025-05-14 02:23:22.294378 | orchestrator | 2025-05-14 02:23:22.295261 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:22.296195 | orchestrator | Wednesday 14 May 2025 02:23:22 +0000 (0:00:00.509) 0:00:03.511 ********* 2025-05-14 02:23:23.006168 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1098e660-21c4-40f1-8a57-5405cc8713a2) 2025-05-14 02:23:23.006277 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1098e660-21c4-40f1-8a57-5405cc8713a2) 2025-05-14 02:23:23.006675 | orchestrator | 2025-05-14 02:23:23.006701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 
02:23:23.006753 | orchestrator | Wednesday 14 May 2025 02:23:22 +0000 (0:00:00.712) 0:00:04.223 ********* 2025-05-14 02:23:23.456111 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_41d88fd2-4f90-4be6-b9c2-0d02d8e1d9f7) 2025-05-14 02:23:23.456170 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_41d88fd2-4f90-4be6-b9c2-0d02d8e1d9f7) 2025-05-14 02:23:23.456237 | orchestrator | 2025-05-14 02:23:23.456465 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:23.456685 | orchestrator | Wednesday 14 May 2025 02:23:23 +0000 (0:00:00.453) 0:00:04.677 ********* 2025-05-14 02:23:23.886804 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_37cfb3af-bf99-4b3f-874b-d71467a37a95) 2025-05-14 02:23:23.886984 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_37cfb3af-bf99-4b3f-874b-d71467a37a95) 2025-05-14 02:23:23.887919 | orchestrator | 2025-05-14 02:23:23.894848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:23.894903 | orchestrator | Wednesday 14 May 2025 02:23:23 +0000 (0:00:00.429) 0:00:05.106 ********* 2025-05-14 02:23:24.248639 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 02:23:24.248885 | orchestrator | 2025-05-14 02:23:24.248910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:24.248923 | orchestrator | Wednesday 14 May 2025 02:23:24 +0000 (0:00:00.363) 0:00:05.469 ********* 2025-05-14 02:23:24.786748 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-14 02:23:24.786878 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-14 02:23:24.787231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-14 02:23:24.787920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-14 02:23:24.788464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-14 02:23:24.789897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-14 02:23:24.790362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-14 02:23:24.791487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-14 02:23:24.792768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-14 02:23:24.794229 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-14 02:23:24.794588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-14 02:23:24.797341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-14 02:23:24.798013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-14 02:23:24.798492 | orchestrator | 2025-05-14 02:23:24.799090 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:24.799611 | orchestrator | Wednesday 14 May 2025 02:23:24 +0000 
(0:00:00.536) 0:00:06.006 ********* 2025-05-14 02:23:24.997191 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:24.997294 | orchestrator | 2025-05-14 02:23:24.997399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:24.998148 | orchestrator | Wednesday 14 May 2025 02:23:24 +0000 (0:00:00.207) 0:00:06.213 ********* 2025-05-14 02:23:25.216836 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:25.217065 | orchestrator | 2025-05-14 02:23:25.218356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:25.218876 | orchestrator | Wednesday 14 May 2025 02:23:25 +0000 (0:00:00.223) 0:00:06.436 ********* 2025-05-14 02:23:25.430772 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:25.432377 | orchestrator | 2025-05-14 02:23:25.433948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:25.435125 | orchestrator | Wednesday 14 May 2025 02:23:25 +0000 (0:00:00.215) 0:00:06.651 ********* 2025-05-14 02:23:25.642496 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:25.643366 | orchestrator | 2025-05-14 02:23:25.644516 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:25.645615 | orchestrator | Wednesday 14 May 2025 02:23:25 +0000 (0:00:00.203) 0:00:06.855 ********* 2025-05-14 02:23:26.229804 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:26.229902 | orchestrator | 2025-05-14 02:23:26.229917 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:26.230324 | orchestrator | Wednesday 14 May 2025 02:23:26 +0000 (0:00:00.593) 0:00:07.448 ********* 2025-05-14 02:23:26.439527 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:26.440189 | orchestrator | 2025-05-14 02:23:26.440652 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:26.440834 | orchestrator | Wednesday 14 May 2025 02:23:26 +0000 (0:00:00.211) 0:00:07.659 ********* 2025-05-14 02:23:26.634343 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:26.634909 | orchestrator | 2025-05-14 02:23:26.637277 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:26.637320 | orchestrator | Wednesday 14 May 2025 02:23:26 +0000 (0:00:00.192) 0:00:07.851 ********* 2025-05-14 02:23:26.860957 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:26.861199 | orchestrator | 2025-05-14 02:23:26.861523 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:26.861765 | orchestrator | Wednesday 14 May 2025 02:23:26 +0000 (0:00:00.229) 0:00:08.081 ********* 2025-05-14 02:23:27.478085 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-14 02:23:27.481570 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-14 02:23:27.482128 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-14 02:23:27.482655 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-14 02:23:27.483611 | orchestrator | 2025-05-14 02:23:27.484170 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:27.488744 | orchestrator | Wednesday 14 May 2025 02:23:27 +0000 (0:00:00.616) 0:00:08.698 ********* 2025-05-14 02:23:27.668153 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 02:23:27.668854 | orchestrator | 2025-05-14 02:23:27.673017 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:27.673118 | orchestrator | Wednesday 14 May 2025 02:23:27 +0000 (0:00:00.191) 0:00:08.889 ********* 2025-05-14 02:23:27.854959 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:27.855094 | orchestrator | 2025-05-14 02:23:27.855172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:27.859550 | orchestrator | Wednesday 14 May 2025 02:23:27 +0000 (0:00:00.185) 0:00:09.074 ********* 2025-05-14 02:23:28.063050 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:28.064446 | orchestrator | 2025-05-14 02:23:28.066265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:28.067025 | orchestrator | Wednesday 14 May 2025 02:23:28 +0000 (0:00:00.207) 0:00:09.282 ********* 2025-05-14 02:23:28.248336 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:28.249199 | orchestrator | 2025-05-14 02:23:28.249914 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-14 02:23:28.253824 | orchestrator | Wednesday 14 May 2025 02:23:28 +0000 (0:00:00.187) 0:00:09.469 ********* 2025-05-14 02:23:28.371220 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:28.371999 | orchestrator | 2025-05-14 02:23:28.372525 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-14 02:23:28.372629 | orchestrator | Wednesday 14 May 2025 02:23:28 +0000 (0:00:00.120) 0:00:09.590 ********* 2025-05-14 02:23:28.564587 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'cb58592c-122c-52e3-870d-c9748cfaa53d'}}) 2025-05-14 02:23:28.564698 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b14ae20f-13fb-53c3-906d-34f9f68040ad'}}) 2025-05-14 02:23:28.564858 | orchestrator | 2025-05-14 02:23:28.564940 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-14 02:23:28.565190 | orchestrator | Wednesday 14 May 2025 02:23:28 +0000 (0:00:00.193) 0:00:09.783 ********* 2025-05-14 02:23:30.724874 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'}) 2025-05-14 02:23:30.726089 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'}) 2025-05-14 02:23:30.727680 | orchestrator | 2025-05-14 02:23:30.728399 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-14 02:23:30.728937 | orchestrator | Wednesday 14 May 2025 02:23:30 +0000 (0:00:02.159) 0:00:11.943 ********* 2025-05-14 02:23:30.867271 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:30.868103 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:30.868890 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:30.869593 | orchestrator | 2025-05-14 02:23:30.870210 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-14 02:23:30.870854 | orchestrator | Wednesday 14 May 2025 02:23:30 +0000 (0:00:00.142) 0:00:12.086 ********* 2025-05-14 02:23:32.309560 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'}) 2025-05-14 02:23:32.309944 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'}) 2025-05-14 02:23:32.310386 | orchestrator | 2025-05-14 02:23:32.311099 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-14 02:23:32.311372 | orchestrator | Wednesday 14 May 2025 02:23:32 +0000 (0:00:01.442) 0:00:13.529 ********* 2025-05-14 02:23:32.475322 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:32.475419 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:32.476783 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:32.480469 | orchestrator | 2025-05-14 02:23:32.481772 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-14 02:23:32.483240 | orchestrator | Wednesday 14 May 2025 02:23:32 +0000 (0:00:00.166) 0:00:13.695 ********* 2025-05-14 02:23:32.627919 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:32.628241 | orchestrator | 2025-05-14 02:23:32.629822 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-14 02:23:32.634078 | orchestrator | Wednesday 14 May 2025 02:23:32 +0000 (0:00:00.153) 0:00:13.848 ********* 2025-05-14 02:23:32.801682 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:32.801832 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:32.801943 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:32.802862 | orchestrator | 2025-05-14 02:23:32.804616 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-14 02:23:32.806101 | orchestrator | Wednesday 14 May 2025 02:23:32 +0000 (0:00:00.170) 0:00:14.019 ********* 2025-05-14 02:23:32.955888 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:32.956565 | orchestrator | 2025-05-14 02:23:32.958135 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-14 02:23:32.961986 | orchestrator | Wednesday 14 May 2025 02:23:32 +0000 (0:00:00.155) 0:00:14.174 ********* 2025-05-14 02:23:33.134359 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:33.135972 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:33.137984 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 02:23:33.139060 | orchestrator | 2025-05-14 02:23:33.142443 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-14 02:23:33.142498 | orchestrator | Wednesday 14 May 2025 02:23:33 +0000 (0:00:00.179) 0:00:14.353 ********* 2025-05-14 02:23:33.300437 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:33.301932 | orchestrator | 2025-05-14 02:23:33.302776 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-14 02:23:33.304807 | orchestrator | Wednesday 14 May 2025 02:23:33 +0000 (0:00:00.164) 0:00:14.518 ********* 2025-05-14 02:23:33.644178 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:33.650197 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:33.651378 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:33.652218 | orchestrator | 2025-05-14 02:23:33.652880 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-14 02:23:33.653894 | orchestrator | Wednesday 14 May 2025 02:23:33 +0000 (0:00:00.346) 0:00:14.864 ********* 2025-05-14 02:23:33.795400 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:23:33.798136 | orchestrator | 2025-05-14 02:23:33.799245 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-14 02:23:33.799281 | orchestrator | Wednesday 14 May 2025 02:23:33 +0000 (0:00:00.150) 0:00:15.014 ********* 2025-05-14 02:23:33.987960 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:33.988313 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:33.988341 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:33.989054 | orchestrator | 2025-05-14 02:23:33.989472 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-14 02:23:33.989981 | orchestrator | Wednesday 14 May 2025 02:23:33 +0000 (0:00:00.193) 0:00:15.208 ********* 2025-05-14 02:23:34.164395 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:34.165934 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:34.167819 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:34.168036 | orchestrator | 2025-05-14 02:23:34.169169 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-14 02:23:34.169988 | orchestrator | Wednesday 14 May 2025 02:23:34 +0000 (0:00:00.175) 0:00:15.383 ********* 2025-05-14 02:23:34.358975 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:34.359090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:34.359174 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:34.359781 | orchestrator | 2025-05-14 02:23:34.360448 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-14 02:23:34.361132 | orchestrator | Wednesday 14 May 2025 02:23:34 +0000 (0:00:00.194) 0:00:15.578 ********* 2025-05-14 02:23:34.509879 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:34.510478 | orchestrator | 2025-05-14 02:23:34.511411 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-14 02:23:34.512010 | orchestrator | Wednesday 14 May 2025 02:23:34 +0000 (0:00:00.151) 0:00:15.729 ********* 2025-05-14 02:23:34.657135 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:34.657367 | orchestrator | 2025-05-14 02:23:34.657860 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-14 02:23:34.658586 | orchestrator | Wednesday 14 May 2025 02:23:34 +0000 (0:00:00.147) 0:00:15.877 ********* 2025-05-14 02:23:34.809131 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:34.809622 | orchestrator | 2025-05-14 02:23:34.810180 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-14 02:23:34.810363 | orchestrator | Wednesday 14 May 2025 02:23:34 +0000 (0:00:00.150) 0:00:16.028 ********* 2025-05-14 02:23:34.972218 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:23:34.972425 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-14 02:23:34.972502 | orchestrator | } 2025-05-14 02:23:34.974107 | orchestrator | 2025-05-14 02:23:34.974458 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-14 02:23:34.975175 | orchestrator | Wednesday 14 May 2025 02:23:34 +0000 (0:00:00.163) 0:00:16.192 ********* 2025-05-14 02:23:35.135926 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:23:35.136138 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-14 02:23:35.136770 | orchestrator | } 2025-05-14 02:23:35.138826 | orchestrator | 2025-05-14 02:23:35.140208 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-14 02:23:35.140946 | orchestrator | Wednesday 14 May 2025 02:23:35 +0000 (0:00:00.163) 0:00:16.356 ********* 2025-05-14 02:23:35.273771 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:23:35.273962 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-14 02:23:35.274910 | orchestrator | } 2025-05-14 02:23:35.275383 | orchestrator | 2025-05-14 02:23:35.276269 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-14 02:23:35.278487 | orchestrator | Wednesday 14 May 2025 02:23:35 +0000 (0:00:00.137) 0:00:16.494 ********* 2025-05-14 02:23:36.408935 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:23:36.409564 | orchestrator | 2025-05-14 02:23:36.410123 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-14 02:23:36.410908 | orchestrator | Wednesday 14 May 2025 02:23:36 +0000 (0:00:01.135) 0:00:17.629 ********* 2025-05-14 02:23:36.929595 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:23:36.930405 | orchestrator | 2025-05-14 02:23:36.932392 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-05-14 02:23:36.932450 | orchestrator | Wednesday 14 May 2025 02:23:36 +0000 (0:00:00.518) 0:00:18.148 ********* 2025-05-14 02:23:37.474803 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:23:37.475321 | orchestrator | 2025-05-14 02:23:37.477957 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-14 02:23:37.478132 | orchestrator | Wednesday 14 May 2025 02:23:37 +0000 (0:00:00.546) 0:00:18.695 ********* 2025-05-14 02:23:37.602698 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:23:37.604292 | orchestrator | 2025-05-14 02:23:37.605035 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-14 02:23:37.605061 | orchestrator | Wednesday 14 May 2025 02:23:37 +0000 (0:00:00.127) 0:00:18.822 ********* 2025-05-14 02:23:37.709991 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:37.710992 | orchestrator | 2025-05-14 02:23:37.711895 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-14 02:23:37.714060 | orchestrator | Wednesday 14 May 2025 02:23:37 +0000 (0:00:00.108) 0:00:18.931 ********* 2025-05-14 02:23:37.811633 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:37.811847 | orchestrator | 2025-05-14 02:23:37.812074 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-14 02:23:37.812469 | orchestrator | Wednesday 14 May 2025 02:23:37 +0000 (0:00:00.101) 0:00:19.032 ********* 2025-05-14 02:23:37.939207 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:23:37.939626 | orchestrator |  "vgs_report": { 2025-05-14 02:23:37.939652 | orchestrator |  "vg": [] 2025-05-14 02:23:37.940693 | orchestrator |  } 2025-05-14 02:23:37.941192 | orchestrator | } 2025-05-14 02:23:37.941603 | orchestrator | 2025-05-14 02:23:37.942119 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-14 02:23:37.942673 | orchestrator | Wednesday 14 May 2025 02:23:37 +0000 (0:00:00.126) 0:00:19.159 ********* 2025-05-14 02:23:38.050301 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:38.050551 | orchestrator | 2025-05-14 02:23:38.051282 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-14 02:23:38.051501 | orchestrator | Wednesday 14 May 2025 02:23:38 +0000 (0:00:00.112) 0:00:19.271 ********* 2025-05-14 02:23:38.172145 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:38.172415 | orchestrator | 2025-05-14 02:23:38.172824 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-14 02:23:38.173395 | orchestrator | Wednesday 14 May 2025 02:23:38 +0000 (0:00:00.121) 0:00:19.393 ********* 2025-05-14 02:23:38.304155 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:38.304903 | orchestrator | 2025-05-14 02:23:38.307413 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-14 02:23:38.309988 | orchestrator | Wednesday 14 May 2025 02:23:38 +0000 (0:00:00.132) 0:00:19.525 ********* 2025-05-14 02:23:38.425410 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:38.425905 | orchestrator | 2025-05-14 02:23:38.426832 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-14 02:23:38.427569 | orchestrator | Wednesday 14 May 2025 02:23:38 +0000 (0:00:00.120) 0:00:19.645 ********* 2025-05-14 
02:23:38.691994 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:38.692701 | orchestrator | 2025-05-14 02:23:38.693479 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-14 02:23:38.694158 | orchestrator | Wednesday 14 May 2025 02:23:38 +0000 (0:00:00.266) 0:00:19.912 ********* 2025-05-14 02:23:38.822216 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:38.823054 | orchestrator | 2025-05-14 02:23:38.823954 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-14 02:23:38.824323 | orchestrator | Wednesday 14 May 2025 02:23:38 +0000 (0:00:00.131) 0:00:20.043 ********* 2025-05-14 02:23:38.967994 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:38.969437 | orchestrator | 2025-05-14 02:23:38.969943 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-14 02:23:38.971294 | orchestrator | Wednesday 14 May 2025 02:23:38 +0000 (0:00:00.143) 0:00:20.187 ********* 2025-05-14 02:23:39.106059 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:39.106140 | orchestrator | 2025-05-14 02:23:39.107336 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-14 02:23:39.110163 | orchestrator | Wednesday 14 May 2025 02:23:39 +0000 (0:00:00.138) 0:00:20.326 ********* 2025-05-14 02:23:39.236057 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:39.236154 | orchestrator | 2025-05-14 02:23:39.237090 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-14 02:23:39.237132 | orchestrator | Wednesday 14 May 2025 02:23:39 +0000 (0:00:00.131) 0:00:20.457 ********* 2025-05-14 02:23:39.385258 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:39.385358 | orchestrator | 2025-05-14 02:23:39.385506 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-14 02:23:39.386229 | orchestrator | Wednesday 14 May 2025 02:23:39 +0000 (0:00:00.148) 0:00:20.606 ********* 2025-05-14 02:23:39.523265 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:39.523432 | orchestrator | 2025-05-14 02:23:39.523769 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-14 02:23:39.524157 | orchestrator | Wednesday 14 May 2025 02:23:39 +0000 (0:00:00.136) 0:00:20.743 ********* 2025-05-14 02:23:39.645618 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:39.645857 | orchestrator | 2025-05-14 02:23:39.646661 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-14 02:23:39.649087 | orchestrator | Wednesday 14 May 2025 02:23:39 +0000 (0:00:00.122) 0:00:20.866 ********* 2025-05-14 02:23:39.800339 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:39.801330 | orchestrator | 2025-05-14 02:23:39.803157 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-14 02:23:39.803183 | orchestrator | Wednesday 14 May 2025 02:23:39 +0000 (0:00:00.154) 0:00:21.020 ********* 2025-05-14 02:23:39.928289 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:39.928960 | orchestrator | 2025-05-14 02:23:39.929087 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-14 02:23:39.930771 | orchestrator | Wednesday 14 May 2025 02:23:39 +0000 (0:00:00.128) 0:00:21.149 
********* 2025-05-14 02:23:40.114357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:40.114997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:40.115414 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:40.116497 | orchestrator | 2025-05-14 02:23:40.117608 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-14 02:23:40.119434 | orchestrator | Wednesday 14 May 2025 02:23:40 +0000 (0:00:00.184) 0:00:21.334 ********* 2025-05-14 02:23:40.258452 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:40.259526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:40.260280 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:40.261165 | orchestrator | 2025-05-14 02:23:40.262063 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-14 02:23:40.262859 | orchestrator | Wednesday 14 May 2025 02:23:40 +0000 (0:00:00.145) 0:00:21.479 ********* 2025-05-14 02:23:40.553841 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:40.555102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:40.556040 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:40.556073 | orchestrator | 2025-05-14 02:23:40.556748 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-14 02:23:40.557181 | orchestrator | Wednesday 14 May 2025 02:23:40 +0000 (0:00:00.295) 0:00:21.774 ********* 2025-05-14 02:23:40.739344 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:40.739548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:40.740378 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:40.741074 | orchestrator | 2025-05-14 02:23:40.743140 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-14 02:23:40.743166 | orchestrator | Wednesday 14 May 2025 02:23:40 +0000 (0:00:00.184) 0:00:21.959 ********* 2025-05-14 02:23:40.917105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:40.918536 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:40.918824 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:40.919362 | orchestrator | 2025-05-14 02:23:40.919921 | 
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-14 02:23:40.920220 | orchestrator | Wednesday 14 May 2025 02:23:40 +0000 (0:00:00.178) 0:00:22.138 ********* 2025-05-14 02:23:41.098178 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:41.098272 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:41.098384 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:41.101305 | orchestrator | 2025-05-14 02:23:41.103229 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-14 02:23:41.103253 | orchestrator | Wednesday 14 May 2025 02:23:41 +0000 (0:00:00.180) 0:00:22.318 ********* 2025-05-14 02:23:41.300874 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:41.303575 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:41.304851 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:41.305921 | orchestrator | 2025-05-14 02:23:41.306848 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-14 02:23:41.308370 | orchestrator | Wednesday 14 May 2025 02:23:41 +0000 (0:00:00.202) 0:00:22.520 ********* 2025-05-14 02:23:41.499807 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:41.500252 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:41.501082 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:41.502450 | orchestrator | 2025-05-14 02:23:41.503844 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-14 02:23:41.503931 | orchestrator | Wednesday 14 May 2025 02:23:41 +0000 (0:00:00.199) 0:00:22.719 ********* 2025-05-14 02:23:42.014765 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:23:42.015141 | orchestrator | 2025-05-14 02:23:42.015741 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-14 02:23:42.016641 | orchestrator | Wednesday 14 May 2025 02:23:42 +0000 (0:00:00.512) 0:00:23.232 ********* 2025-05-14 02:23:42.554320 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:23:42.554854 | orchestrator | 2025-05-14 02:23:42.555798 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-14 02:23:42.558265 | orchestrator | Wednesday 14 May 2025 02:23:42 +0000 (0:00:00.540) 0:00:23.773 ********* 2025-05-14 02:23:42.736561 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:23:42.737035 | orchestrator | 2025-05-14 02:23:42.738372 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-14 02:23:42.739445 | orchestrator | Wednesday 14 May 2025 02:23:42 +0000 (0:00:00.182) 0:00:23.956 ********* 2025-05-14 02:23:42.941689 | 
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'vg_name': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'}) 2025-05-14 02:23:42.942619 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'vg_name': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'}) 2025-05-14 02:23:42.943639 | orchestrator | 2025-05-14 02:23:42.945445 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-14 02:23:42.946575 | orchestrator | Wednesday 14 May 2025 02:23:42 +0000 (0:00:00.203) 0:00:24.160 ********* 2025-05-14 02:23:43.349885 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:43.350175 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:43.350205 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:43.350822 | orchestrator | 2025-05-14 02:23:43.351154 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-14 02:23:43.351672 | orchestrator | Wednesday 14 May 2025 02:23:43 +0000 (0:00:00.410) 0:00:24.571 ********* 2025-05-14 02:23:43.527795 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:43.528654 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:43.529924 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:43.530647 | orchestrator | 2025-05-14 02:23:43.531198 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-14 02:23:43.532167 | orchestrator | Wednesday 14 May 2025 02:23:43 +0000 (0:00:00.176) 0:00:24.747 ********* 2025-05-14 02:23:43.694999 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'})  2025-05-14 02:23:43.696388 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'})  2025-05-14 02:23:43.697684 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:23:43.698519 | orchestrator | 2025-05-14 02:23:43.699868 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-14 02:23:43.700641 | orchestrator | Wednesday 14 May 2025 02:23:43 +0000 (0:00:00.168) 0:00:24.915 ********* 2025-05-14 02:23:44.381610 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:23:44.382584 | orchestrator |  "lvm_report": { 2025-05-14 02:23:44.385103 | orchestrator |  "lv": [ 2025-05-14 02:23:44.385594 | orchestrator |  { 2025-05-14 02:23:44.386234 | orchestrator |  "lv_name": "osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad", 2025-05-14 02:23:44.387201 | orchestrator |  "vg_name": "ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad" 2025-05-14 02:23:44.387506 | orchestrator |  }, 2025-05-14 02:23:44.388375 | orchestrator |  { 2025-05-14 02:23:44.389345 | orchestrator |  "lv_name": "osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d", 2025-05-14 
02:23:44.389889 | orchestrator |  "vg_name": "ceph-cb58592c-122c-52e3-870d-c9748cfaa53d" 2025-05-14 02:23:44.390556 | orchestrator |  } 2025-05-14 02:23:44.390755 | orchestrator |  ], 2025-05-14 02:23:44.391183 | orchestrator |  "pv": [ 2025-05-14 02:23:44.391629 | orchestrator |  { 2025-05-14 02:23:44.391946 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-14 02:23:44.392277 | orchestrator |  "vg_name": "ceph-cb58592c-122c-52e3-870d-c9748cfaa53d" 2025-05-14 02:23:44.393185 | orchestrator |  }, 2025-05-14 02:23:44.393264 | orchestrator |  { 2025-05-14 02:23:44.393642 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-14 02:23:44.394340 | orchestrator |  "vg_name": "ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad" 2025-05-14 02:23:44.395780 | orchestrator |  } 2025-05-14 02:23:44.397060 | orchestrator |  ] 2025-05-14 02:23:44.398149 | orchestrator |  } 2025-05-14 02:23:44.399252 | orchestrator | } 2025-05-14 02:23:44.400307 | orchestrator | 2025-05-14 02:23:44.401231 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-14 02:23:44.402704 | orchestrator | 2025-05-14 02:23:44.402745 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 02:23:44.403628 | orchestrator | Wednesday 14 May 2025 02:23:44 +0000 (0:00:00.685) 0:00:25.600 ********* 2025-05-14 02:23:44.873100 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-14 02:23:44.873205 | orchestrator | 2025-05-14 02:23:44.873884 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 02:23:44.875282 | orchestrator | Wednesday 14 May 2025 02:23:44 +0000 (0:00:00.493) 0:00:26.094 ********* 2025-05-14 02:23:45.085965 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:23:45.086519 | orchestrator | 2025-05-14 02:23:45.087054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:45.087710 | orchestrator | Wednesday 14 May 2025 02:23:45 +0000 (0:00:00.210) 0:00:26.304 ********* 2025-05-14 02:23:45.504550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-14 02:23:45.504706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-14 02:23:45.505255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-14 02:23:45.505286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-14 02:23:45.505519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-14 02:23:45.506250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-14 02:23:45.506801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-14 02:23:45.507830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-14 02:23:45.508020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-14 02:23:45.508293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-14 02:23:45.508758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-14 02:23:45.509272 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-14 02:23:45.509293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-14 02:23:45.509954 | orchestrator | 2025-05-14 02:23:45.510088 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:45.510500 | orchestrator | Wednesday 14 May 2025 02:23:45 +0000 (0:00:00.420) 0:00:26.725 ********* 2025-05-14 02:23:45.691334 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:45.691524 | orchestrator | 2025-05-14 02:23:45.693068 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:45.694008 | orchestrator | Wednesday 14 May 2025 02:23:45 +0000 (0:00:00.185) 0:00:26.911 ********* 2025-05-14 02:23:45.878893 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:45.879147 | orchestrator | 2025-05-14 02:23:45.880030 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:45.880863 | orchestrator | Wednesday 14 May 2025 02:23:45 +0000 (0:00:00.189) 0:00:27.100 ********* 2025-05-14 02:23:46.065763 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:46.066530 | orchestrator | 2025-05-14 02:23:46.067415 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:46.068022 | orchestrator | Wednesday 14 May 2025 02:23:46 +0000 (0:00:00.184) 0:00:27.285 ********* 2025-05-14 02:23:46.246322 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:46.246656 | orchestrator | 2025-05-14 02:23:46.249115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:46.249161 | orchestrator | Wednesday 14 May 2025 02:23:46 +0000 (0:00:00.181) 0:00:27.466 ********* 2025-05-14 02:23:46.426892 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:46.426999 | orchestrator | 2025-05-14 02:23:46.427014 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:46.427027 | orchestrator | Wednesday 14 May 2025 02:23:46 +0000 (0:00:00.178) 0:00:27.645 ********* 2025-05-14 02:23:46.616799 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:46.618181 | orchestrator | 2025-05-14 02:23:46.618292 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:46.619842 | orchestrator | Wednesday 14 May 2025 02:23:46 +0000 (0:00:00.191) 0:00:27.836 ********* 2025-05-14 02:23:46.796797 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:46.798090 | orchestrator | 2025-05-14 02:23:46.798785 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:46.799806 | orchestrator | Wednesday 14 May 2025 02:23:46 +0000 (0:00:00.180) 0:00:28.017 ********* 2025-05-14 02:23:47.274752 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:47.275758 | orchestrator | 2025-05-14 02:23:47.275806 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:47.276882 | orchestrator | Wednesday 14 May 2025 02:23:47 +0000 (0:00:00.477) 0:00:28.494 ********* 2025-05-14 02:23:47.674691 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d) 2025-05-14 02:23:47.674940 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d) 2025-05-14 02:23:47.674995 | orchestrator | 2025-05-14 02:23:47.675047 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:47.675201 | orchestrator | Wednesday 14 May 2025 02:23:47 +0000 (0:00:00.401) 0:00:28.896 ********* 2025-05-14 02:23:48.014922 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5f54ee85-b545-45a6-a856-bcb5a8b0ac61) 2025-05-14 02:23:48.015603 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5f54ee85-b545-45a6-a856-bcb5a8b0ac61) 2025-05-14 02:23:48.016425 | orchestrator | 2025-05-14 02:23:48.018117 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:48.018153 | orchestrator | Wednesday 14 May 2025 02:23:48 +0000 (0:00:00.339) 0:00:29.235 ********* 2025-05-14 02:23:48.412868 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7ac274fd-1a92-402b-b855-ca6b0ab20cf2) 2025-05-14 02:23:48.413124 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7ac274fd-1a92-402b-b855-ca6b0ab20cf2) 2025-05-14 02:23:48.414538 | orchestrator | 2025-05-14 02:23:48.414806 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:48.415054 | orchestrator | Wednesday 14 May 2025 02:23:48 +0000 (0:00:00.396) 0:00:29.632 ********* 2025-05-14 02:23:48.848093 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1d2bee4e-0e3b-437e-a6d5-c0ab15229884) 2025-05-14 02:23:48.848657 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1d2bee4e-0e3b-437e-a6d5-c0ab15229884) 2025-05-14 02:23:48.850760 | orchestrator | 2025-05-14 02:23:48.850807 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:23:48.850821 | orchestrator | Wednesday 14 May 2025 02:23:48 +0000 (0:00:00.436) 0:00:30.069 ********* 2025-05-14 02:23:49.211119 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 02:23:49.211199 | orchestrator | 2025-05-14 02:23:49.211207 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:49.211543 | orchestrator | Wednesday 14 May 2025 02:23:49 +0000 (0:00:00.361) 0:00:30.431 ********* 2025-05-14 02:23:49.654663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-14 02:23:49.655036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-14 02:23:49.656415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-14 02:23:49.657181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-14 02:23:49.658258 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-14 02:23:49.659422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-14 02:23:49.660031 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-14 02:23:49.660695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-14 02:23:49.661183 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-14 02:23:49.661595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-14 02:23:49.662230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-14 02:23:49.663212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-14 02:23:49.663231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-14 02:23:49.663696 | orchestrator | 2025-05-14 02:23:49.664155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:49.664544 | orchestrator | Wednesday 14 May 2025 02:23:49 +0000 (0:00:00.444) 0:00:30.875 ********* 2025-05-14 02:23:49.871895 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:49.872194 | orchestrator | 2025-05-14 02:23:49.873239 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:49.874208 | orchestrator | Wednesday 14 May 2025 02:23:49 +0000 (0:00:00.216) 0:00:31.092 ********* 2025-05-14 02:23:50.041121 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:50.041662 | orchestrator | 2025-05-14 02:23:50.042427 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:50.043157 | orchestrator | Wednesday 14 May 2025 02:23:50 +0000 (0:00:00.169) 0:00:31.262 ********* 2025-05-14 02:23:50.560143 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:50.560848 | orchestrator | 2025-05-14 02:23:50.561504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:50.562187 | orchestrator | Wednesday 14 May 2025 02:23:50 +0000 (0:00:00.517) 0:00:31.779 ********* 2025-05-14 02:23:50.767698 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:50.768229 | orchestrator | 2025-05-14 02:23:50.768986 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:50.769593 | orchestrator | Wednesday 14 May 2025 02:23:50 +0000 (0:00:00.208) 0:00:31.988 ********* 2025-05-14 02:23:50.961587 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:50.961865 | orchestrator | 2025-05-14 02:23:50.962943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:50.963937 | orchestrator | Wednesday 14 May 2025 02:23:50 +0000 (0:00:00.193) 0:00:32.181 ********* 2025-05-14 02:23:51.153840 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:51.153939 | orchestrator | 2025-05-14 02:23:51.154372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:51.155299 | orchestrator | Wednesday 14 May 2025 02:23:51 +0000 (0:00:00.191) 0:00:32.373 ********* 2025-05-14 02:23:51.338640 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:51.338900 | orchestrator | 2025-05-14 02:23:51.343277 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:51.343541 | orchestrator | Wednesday 14 May 2025 02:23:51 +0000 (0:00:00.185) 0:00:32.559 ********* 2025-05-14 02:23:51.509541 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:51.511230 | orchestrator | 2025-05-14 02:23:51.511808 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-14 02:23:51.512551 | orchestrator | Wednesday 14 May 2025 02:23:51 +0000 (0:00:00.171) 0:00:32.730 ********* 2025-05-14 02:23:52.124852 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-14 02:23:52.126390 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-14 02:23:52.126424 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-14 02:23:52.128049 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-14 02:23:52.128613 | orchestrator | 2025-05-14 02:23:52.128784 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:52.129258 | orchestrator | Wednesday 14 May 2025 02:23:52 +0000 (0:00:00.613) 0:00:33.343 ********* 2025-05-14 02:23:52.312262 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:52.312423 | orchestrator | 2025-05-14 02:23:52.313296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:52.313791 | orchestrator | Wednesday 14 May 2025 02:23:52 +0000 (0:00:00.188) 0:00:33.532 ********* 2025-05-14 02:23:52.543842 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:52.544191 | orchestrator | 2025-05-14 02:23:52.544913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:52.545661 | orchestrator | Wednesday 14 May 2025 02:23:52 +0000 (0:00:00.231) 0:00:33.764 ********* 2025-05-14 02:23:52.723576 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:52.723875 | orchestrator | 2025-05-14 02:23:52.725514 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:23:52.725770 | orchestrator | Wednesday 14 May 2025 02:23:52 +0000 (0:00:00.180) 0:00:33.944 ********* 2025-05-14 02:23:52.909003 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:52.909691 | orchestrator | 2025-05-14 02:23:52.910426 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-14 02:23:52.911650 | orchestrator | Wednesday 14 May 2025 02:23:52 +0000 (0:00:00.185) 0:00:34.130 ********* 2025-05-14 02:23:53.188242 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:53.188341 | orchestrator | 2025-05-14 02:23:53.189344 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-14 02:23:53.189369 | orchestrator | Wednesday 14 May 2025 02:23:53 +0000 (0:00:00.275) 0:00:34.405 ********* 2025-05-14 02:23:53.375080 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '22852bcc-228b-503b-9f2d-d63325c20b67'}}) 2025-05-14 02:23:53.375181 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fc7bdc9b-bbf6-5512-af7e-0ab125570579'}}) 2025-05-14 02:23:53.375311 | orchestrator | 2025-05-14 02:23:53.376062 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-14 02:23:53.376201 | orchestrator | Wednesday 14 May 2025 02:23:53 +0000 (0:00:00.178) 0:00:34.584 ********* 2025-05-14 02:23:55.330384 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'}) 2025-05-14 02:23:55.330708 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 
'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'}) 2025-05-14 02:23:55.331764 | orchestrator | 2025-05-14 02:23:55.332046 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-14 02:23:55.332663 | orchestrator | Wednesday 14 May 2025 02:23:55 +0000 (0:00:01.966) 0:00:36.550 ********* 2025-05-14 02:23:55.467670 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:23:55.469092 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:23:55.470145 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:55.470625 | orchestrator | 2025-05-14 02:23:55.471410 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-14 02:23:55.472038 | orchestrator | Wednesday 14 May 2025 02:23:55 +0000 (0:00:00.137) 0:00:36.688 ********* 2025-05-14 02:23:56.747289 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'}) 2025-05-14 02:23:56.748774 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'}) 2025-05-14 02:23:56.748823 | orchestrator | 2025-05-14 02:23:56.749155 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-14 02:23:56.749847 | orchestrator | Wednesday 14 May 2025 02:23:56 +0000 (0:00:01.278) 0:00:37.966 ********* 2025-05-14 02:23:56.910980 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:23:56.911494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:23:56.913521 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:56.915231 | orchestrator | 2025-05-14 02:23:56.915524 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-14 02:23:56.915921 | orchestrator | Wednesday 14 May 2025 02:23:56 +0000 (0:00:00.163) 0:00:38.130 ********* 2025-05-14 02:23:57.039303 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:57.039849 | orchestrator | 2025-05-14 02:23:57.040685 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-14 02:23:57.040971 | orchestrator | Wednesday 14 May 2025 02:23:57 +0000 (0:00:00.130) 0:00:38.261 ********* 2025-05-14 02:23:57.182890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:23:57.184169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:23:57.184569 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:57.185792 | orchestrator | 2025-05-14 02:23:57.186462 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-14 02:23:57.187521 | orchestrator | Wednesday 
14 May 2025 02:23:57 +0000 (0:00:00.142) 0:00:38.403 ********* 2025-05-14 02:23:57.309886 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:57.310269 | orchestrator | 2025-05-14 02:23:57.311045 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-14 02:23:57.311765 | orchestrator | Wednesday 14 May 2025 02:23:57 +0000 (0:00:00.127) 0:00:38.531 ********* 2025-05-14 02:23:57.604057 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:23:57.604485 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:23:57.605147 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:57.607457 | orchestrator | 2025-05-14 02:23:57.607486 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-14 02:23:57.607499 | orchestrator | Wednesday 14 May 2025 02:23:57 +0000 (0:00:00.293) 0:00:38.824 ********* 2025-05-14 02:23:57.740264 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:57.741673 | orchestrator | 2025-05-14 02:23:57.742415 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-14 02:23:57.742446 | orchestrator | Wednesday 14 May 2025 02:23:57 +0000 (0:00:00.135) 0:00:38.960 ********* 2025-05-14 02:23:57.903374 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:23:57.904121 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:23:57.905096 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:57.905621 | orchestrator | 2025-05-14 02:23:57.906402 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-14 02:23:57.907102 | orchestrator | Wednesday 14 May 2025 02:23:57 +0000 (0:00:00.164) 0:00:39.124 ********* 2025-05-14 02:23:58.047595 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:23:58.048971 | orchestrator | 2025-05-14 02:23:58.049001 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-14 02:23:58.049505 | orchestrator | Wednesday 14 May 2025 02:23:58 +0000 (0:00:00.144) 0:00:39.268 ********* 2025-05-14 02:23:58.172433 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:23:58.172528 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:23:58.172882 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:58.173148 | orchestrator | 2025-05-14 02:23:58.173846 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-14 02:23:58.175333 | orchestrator | Wednesday 14 May 2025 02:23:58 +0000 (0:00:00.125) 0:00:39.394 ********* 2025-05-14 02:23:58.316873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 
'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:23:58.317261 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:23:58.320521 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:58.320550 | orchestrator | 2025-05-14 02:23:58.320563 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-14 02:23:58.320575 | orchestrator | Wednesday 14 May 2025 02:23:58 +0000 (0:00:00.142) 0:00:39.537 ********* 2025-05-14 02:23:58.486215 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:23:58.487432 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:23:58.488127 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:58.489264 | orchestrator | 2025-05-14 02:23:58.489476 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-14 02:23:58.490175 | orchestrator | Wednesday 14 May 2025 02:23:58 +0000 (0:00:00.168) 0:00:39.705 ********* 2025-05-14 02:23:58.616124 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:58.616609 | orchestrator | 2025-05-14 02:23:58.618008 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-14 02:23:58.619678 | orchestrator | Wednesday 14 May 2025 02:23:58 +0000 (0:00:00.130) 0:00:39.836 ********* 2025-05-14 02:23:58.751025 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:58.753942 | orchestrator | 2025-05-14 02:23:58.755020 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-14 02:23:58.755709 | orchestrator | Wednesday 14 May 2025 02:23:58 +0000 (0:00:00.134) 0:00:39.970 ********* 2025-05-14 02:23:58.858873 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:23:58.860199 | orchestrator | 2025-05-14 02:23:58.860468 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-14 02:23:58.861196 | orchestrator | Wednesday 14 May 2025 02:23:58 +0000 (0:00:00.108) 0:00:40.079 ********* 2025-05-14 02:23:58.997777 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:23:58.998773 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-14 02:23:58.999640 | orchestrator | } 2025-05-14 02:23:59.000594 | orchestrator | 2025-05-14 02:23:59.001007 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-14 02:23:59.003308 | orchestrator | Wednesday 14 May 2025 02:23:58 +0000 (0:00:00.137) 0:00:40.216 ********* 2025-05-14 02:23:59.269966 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:23:59.270265 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-14 02:23:59.272077 | orchestrator | } 2025-05-14 02:23:59.272239 | orchestrator | 2025-05-14 02:23:59.273315 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-14 02:23:59.274176 | orchestrator | Wednesday 14 May 2025 02:23:59 +0000 (0:00:00.272) 0:00:40.489 ********* 2025-05-14 02:23:59.426309 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:23:59.427111 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-14 
02:23:59.428474 | orchestrator | } 2025-05-14 02:23:59.429891 | orchestrator | 2025-05-14 02:23:59.430711 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-14 02:23:59.431382 | orchestrator | Wednesday 14 May 2025 02:23:59 +0000 (0:00:00.157) 0:00:40.646 ********* 2025-05-14 02:24:00.014795 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:00.014981 | orchestrator | 2025-05-14 02:24:00.016696 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-14 02:24:00.017344 | orchestrator | Wednesday 14 May 2025 02:24:00 +0000 (0:00:00.584) 0:00:41.230 ********* 2025-05-14 02:24:00.549406 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:00.549878 | orchestrator | 2025-05-14 02:24:00.550535 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-14 02:24:00.551178 | orchestrator | Wednesday 14 May 2025 02:24:00 +0000 (0:00:00.537) 0:00:41.768 ********* 2025-05-14 02:24:01.109230 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:01.109398 | orchestrator | 2025-05-14 02:24:01.110326 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-14 02:24:01.111034 | orchestrator | Wednesday 14 May 2025 02:24:01 +0000 (0:00:00.561) 0:00:42.329 ********* 2025-05-14 02:24:01.249341 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:01.249462 | orchestrator | 2025-05-14 02:24:01.250942 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-14 02:24:01.252178 | orchestrator | Wednesday 14 May 2025 02:24:01 +0000 (0:00:00.140) 0:00:42.470 ********* 2025-05-14 02:24:01.366181 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:01.366395 | orchestrator | 2025-05-14 02:24:01.366692 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-14 02:24:01.368019 | orchestrator | Wednesday 14 May 2025 02:24:01 +0000 (0:00:00.110) 0:00:42.580 ********* 2025-05-14 02:24:01.461562 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:01.461945 | orchestrator | 2025-05-14 02:24:01.463107 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-14 02:24:01.465020 | orchestrator | Wednesday 14 May 2025 02:24:01 +0000 (0:00:00.102) 0:00:42.683 ********* 2025-05-14 02:24:01.602533 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:24:01.603205 | orchestrator |  "vgs_report": { 2025-05-14 02:24:01.604166 | orchestrator |  "vg": [] 2025-05-14 02:24:01.604878 | orchestrator |  } 2025-05-14 02:24:01.605753 | orchestrator | } 2025-05-14 02:24:01.607106 | orchestrator | 2025-05-14 02:24:01.608372 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-14 02:24:01.609331 | orchestrator | Wednesday 14 May 2025 02:24:01 +0000 (0:00:00.138) 0:00:42.821 ********* 2025-05-14 02:24:01.741959 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:01.742393 | orchestrator | 2025-05-14 02:24:01.744857 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-14 02:24:01.744885 | orchestrator | Wednesday 14 May 2025 02:24:01 +0000 (0:00:00.141) 0:00:42.962 ********* 2025-05-14 02:24:01.868690 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:01.868864 | orchestrator | 2025-05-14 02:24:01.868881 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-05-14 02:24:01.868986 | orchestrator | Wednesday 14 May 2025 02:24:01 +0000 (0:00:00.127) 0:00:43.090 ********* 2025-05-14 02:24:02.136128 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:02.136295 | orchestrator | 2025-05-14 02:24:02.136950 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-14 02:24:02.137262 | orchestrator | Wednesday 14 May 2025 02:24:02 +0000 (0:00:00.266) 0:00:43.356 ********* 2025-05-14 02:24:02.265085 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:02.265508 | orchestrator | 2025-05-14 02:24:02.266597 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-14 02:24:02.267131 | orchestrator | Wednesday 14 May 2025 02:24:02 +0000 (0:00:00.128) 0:00:43.485 ********* 2025-05-14 02:24:02.379270 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:02.379519 | orchestrator | 2025-05-14 02:24:02.379958 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-14 02:24:02.380648 | orchestrator | Wednesday 14 May 2025 02:24:02 +0000 (0:00:00.115) 0:00:43.600 ********* 2025-05-14 02:24:02.511341 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:02.512283 | orchestrator | 2025-05-14 02:24:02.512826 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-14 02:24:02.513593 | orchestrator | Wednesday 14 May 2025 02:24:02 +0000 (0:00:00.131) 0:00:43.732 ********* 2025-05-14 02:24:02.638326 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:02.638565 | orchestrator | 2025-05-14 02:24:02.639994 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-14 02:24:02.640656 | orchestrator | Wednesday 14 May 2025 02:24:02 +0000 (0:00:00.125) 0:00:43.857 ********* 2025-05-14 02:24:02.769818 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:02.770328 | orchestrator | 2025-05-14 02:24:02.771413 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-14 02:24:02.772379 | orchestrator | Wednesday 14 May 2025 02:24:02 +0000 (0:00:00.133) 0:00:43.991 ********* 2025-05-14 02:24:02.893176 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:02.893579 | orchestrator | 2025-05-14 02:24:02.894070 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-14 02:24:02.894887 | orchestrator | Wednesday 14 May 2025 02:24:02 +0000 (0:00:00.122) 0:00:44.113 ********* 2025-05-14 02:24:03.015786 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:03.015915 | orchestrator | 2025-05-14 02:24:03.015932 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-14 02:24:03.016067 | orchestrator | Wednesday 14 May 2025 02:24:03 +0000 (0:00:00.122) 0:00:44.236 ********* 2025-05-14 02:24:03.151765 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:03.152431 | orchestrator | 2025-05-14 02:24:03.156088 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-14 02:24:03.157340 | orchestrator | Wednesday 14 May 2025 02:24:03 +0000 (0:00:00.135) 0:00:44.372 ********* 2025-05-14 02:24:03.276892 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:03.277099 | orchestrator | 2025-05-14 02:24:03.277718 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-05-14 02:24:03.278619 | orchestrator | Wednesday 14 May 2025 02:24:03 +0000 (0:00:00.124) 0:00:44.496 ********* 2025-05-14 02:24:03.411292 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:03.411478 | orchestrator | 2025-05-14 02:24:03.412072 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-14 02:24:03.412534 | orchestrator | Wednesday 14 May 2025 02:24:03 +0000 (0:00:00.135) 0:00:44.632 ********* 2025-05-14 02:24:03.545484 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:03.545942 | orchestrator | 2025-05-14 02:24:03.546366 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-14 02:24:03.546846 | orchestrator | Wednesday 14 May 2025 02:24:03 +0000 (0:00:00.134) 0:00:44.766 ********* 2025-05-14 02:24:03.860973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:24:03.861436 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:24:03.862366 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:03.862880 | orchestrator | 2025-05-14 02:24:03.863531 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-14 02:24:03.864303 | orchestrator | Wednesday 14 May 2025 02:24:03 +0000 (0:00:00.315) 0:00:45.081 ********* 2025-05-14 02:24:04.047451 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:24:04.047555 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:24:04.047570 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:04.047893 | orchestrator | 2025-05-14 02:24:04.047984 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-14 02:24:04.048295 | orchestrator | Wednesday 14 May 2025 02:24:04 +0000 (0:00:00.187) 0:00:45.268 ********* 2025-05-14 02:24:04.201035 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:24:04.201600 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:24:04.203659 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:04.203698 | orchestrator | 2025-05-14 02:24:04.204903 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-14 02:24:04.205461 | orchestrator | Wednesday 14 May 2025 02:24:04 +0000 (0:00:00.152) 0:00:45.421 ********* 2025-05-14 02:24:04.351549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:24:04.352337 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 
02:24:04.352942 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:04.354533 | orchestrator | 2025-05-14 02:24:04.354579 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-14 02:24:04.355278 | orchestrator | Wednesday 14 May 2025 02:24:04 +0000 (0:00:00.150) 0:00:45.572 ********* 2025-05-14 02:24:04.516126 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:24:04.516327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:24:04.517105 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:04.518809 | orchestrator | 2025-05-14 02:24:04.518825 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-14 02:24:04.518830 | orchestrator | Wednesday 14 May 2025 02:24:04 +0000 (0:00:00.164) 0:00:45.737 ********* 2025-05-14 02:24:04.696061 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:24:04.696565 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:24:04.697126 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:04.699322 | orchestrator | 2025-05-14 02:24:04.700920 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-14 02:24:04.701605 | orchestrator | Wednesday 14 May 2025 02:24:04 +0000 (0:00:00.178) 0:00:45.916 ********* 2025-05-14 02:24:04.839824 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:24:04.840910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:24:04.840939 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:04.841259 | orchestrator | 2025-05-14 02:24:04.842367 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-14 02:24:04.842614 | orchestrator | Wednesday 14 May 2025 02:24:04 +0000 (0:00:00.143) 0:00:46.059 ********* 2025-05-14 02:24:04.999951 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:24:05.000796 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:24:05.001855 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:05.002626 | orchestrator | 2025-05-14 02:24:05.004231 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-14 02:24:05.005128 | orchestrator | Wednesday 14 May 2025 02:24:04 +0000 (0:00:00.160) 0:00:46.220 ********* 2025-05-14 02:24:05.534283 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:05.534495 | orchestrator | 2025-05-14 02:24:05.535144 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-05-14 02:24:05.535232 | orchestrator | Wednesday 14 May 2025 02:24:05 +0000 (0:00:00.534) 0:00:46.754 ********* 2025-05-14 02:24:06.118337 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:06.118466 | orchestrator | 2025-05-14 02:24:06.120400 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-14 02:24:06.120433 | orchestrator | Wednesday 14 May 2025 02:24:06 +0000 (0:00:00.583) 0:00:47.337 ********* 2025-05-14 02:24:06.244108 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:06.245235 | orchestrator | 2025-05-14 02:24:06.245322 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-14 02:24:06.245696 | orchestrator | Wednesday 14 May 2025 02:24:06 +0000 (0:00:00.127) 0:00:47.465 ********* 2025-05-14 02:24:06.593268 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'vg_name': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'}) 2025-05-14 02:24:06.593455 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'vg_name': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'}) 2025-05-14 02:24:06.593476 | orchestrator | 2025-05-14 02:24:06.595037 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-14 02:24:06.595548 | orchestrator | Wednesday 14 May 2025 02:24:06 +0000 (0:00:00.348) 0:00:47.814 ********* 2025-05-14 02:24:06.764545 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:24:06.765112 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:24:06.765382 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:06.766402 | orchestrator | 2025-05-14 02:24:06.766715 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-14 02:24:06.767460 | orchestrator | Wednesday 14 May 2025 02:24:06 +0000 (0:00:00.167) 0:00:47.982 ********* 2025-05-14 02:24:06.952109 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:24:06.953262 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:24:06.954283 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:06.954994 | orchestrator | 2025-05-14 02:24:06.955927 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-14 02:24:06.956579 | orchestrator | Wednesday 14 May 2025 02:24:06 +0000 (0:00:00.189) 0:00:48.171 ********* 2025-05-14 02:24:07.139251 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'})  2025-05-14 02:24:07.139807 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'})  2025-05-14 02:24:07.139842 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:07.140264 | orchestrator | 2025-05-14 
02:24:07.140362 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-14 02:24:07.140763 | orchestrator | Wednesday 14 May 2025 02:24:07 +0000 (0:00:00.188) 0:00:48.359 ********* 2025-05-14 02:24:08.051410 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:24:08.053912 | orchestrator |  "lvm_report": { 2025-05-14 02:24:08.055051 | orchestrator |  "lv": [ 2025-05-14 02:24:08.055815 | orchestrator |  { 2025-05-14 02:24:08.057190 | orchestrator |  "lv_name": "osd-block-22852bcc-228b-503b-9f2d-d63325c20b67", 2025-05-14 02:24:08.058201 | orchestrator |  "vg_name": "ceph-22852bcc-228b-503b-9f2d-d63325c20b67" 2025-05-14 02:24:08.058599 | orchestrator |  }, 2025-05-14 02:24:08.059404 | orchestrator |  { 2025-05-14 02:24:08.060130 | orchestrator |  "lv_name": "osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579", 2025-05-14 02:24:08.061027 | orchestrator |  "vg_name": "ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579" 2025-05-14 02:24:08.061825 | orchestrator |  } 2025-05-14 02:24:08.062290 | orchestrator |  ], 2025-05-14 02:24:08.063114 | orchestrator |  "pv": [ 2025-05-14 02:24:08.064300 | orchestrator |  { 2025-05-14 02:24:08.064883 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-14 02:24:08.065595 | orchestrator |  "vg_name": "ceph-22852bcc-228b-503b-9f2d-d63325c20b67" 2025-05-14 02:24:08.066341 | orchestrator |  }, 2025-05-14 02:24:08.067495 | orchestrator |  { 2025-05-14 02:24:08.067619 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-14 02:24:08.069872 | orchestrator |  "vg_name": "ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579" 2025-05-14 02:24:08.070904 | orchestrator |  } 2025-05-14 02:24:08.071824 | orchestrator |  ] 2025-05-14 02:24:08.073013 | orchestrator |  } 2025-05-14 02:24:08.073560 | orchestrator | } 2025-05-14 02:24:08.074394 | orchestrator | 2025-05-14 02:24:08.075152 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-14 02:24:08.075697 | orchestrator | 2025-05-14 02:24:08.076380 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-14 02:24:08.077500 | orchestrator | Wednesday 14 May 2025 02:24:08 +0000 (0:00:00.910) 0:00:49.270 ********* 2025-05-14 02:24:08.329613 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-14 02:24:08.331667 | orchestrator | 2025-05-14 02:24:08.337359 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-14 02:24:08.337421 | orchestrator | Wednesday 14 May 2025 02:24:08 +0000 (0:00:00.274) 0:00:49.544 ********* 2025-05-14 02:24:08.605271 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:08.606973 | orchestrator | 2025-05-14 02:24:08.609685 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:08.609998 | orchestrator | Wednesday 14 May 2025 02:24:08 +0000 (0:00:00.279) 0:00:49.824 ********* 2025-05-14 02:24:09.113871 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-14 02:24:09.115292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-14 02:24:09.117666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-14 02:24:09.117701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-14 02:24:09.119383 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-14 02:24:09.120586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-14 02:24:09.121141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-14 02:24:09.121817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-14 02:24:09.122781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-14 02:24:09.123662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-14 02:24:09.124223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-14 02:24:09.125194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-14 02:24:09.125687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-14 02:24:09.126802 | orchestrator | 2025-05-14 02:24:09.127061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:09.127382 | orchestrator | Wednesday 14 May 2025 02:24:09 +0000 (0:00:00.507) 0:00:50.332 ********* 2025-05-14 02:24:09.332505 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:09.333361 | orchestrator | 2025-05-14 02:24:09.333405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:09.333421 | orchestrator | Wednesday 14 May 2025 02:24:09 +0000 (0:00:00.219) 0:00:50.552 ********* 2025-05-14 02:24:09.570655 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:09.572165 | orchestrator | 2025-05-14 02:24:09.575610 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:09.576165 | orchestrator | Wednesday 14 May 2025 02:24:09 +0000 (0:00:00.238) 0:00:50.791 ********* 2025-05-14 02:24:09.791905 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:09.795079 | orchestrator | 2025-05-14 02:24:09.795191 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:09.795207 | orchestrator | Wednesday 14 May 2025 02:24:09 +0000 (0:00:00.217) 0:00:51.008 ********* 2025-05-14 02:24:10.026661 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:10.027036 | orchestrator | 2025-05-14 02:24:10.028984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:10.029130 | orchestrator | Wednesday 14 May 2025 02:24:10 +0000 (0:00:00.238) 0:00:51.246 ********* 2025-05-14 02:24:10.317960 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:10.318161 | orchestrator | 2025-05-14 02:24:10.319364 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:10.320281 | orchestrator | Wednesday 14 May 2025 02:24:10 +0000 (0:00:00.288) 0:00:51.535 ********* 2025-05-14 02:24:11.028167 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:11.028272 | orchestrator | 2025-05-14 02:24:11.028548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:11.031810 | orchestrator | Wednesday 14 May 2025 02:24:11 +0000 (0:00:00.710) 0:00:52.246 ********* 2025-05-14 02:24:11.248595 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 02:24:11.248935 | orchestrator | 2025-05-14 02:24:11.250096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:11.250684 | orchestrator | Wednesday 14 May 2025 02:24:11 +0000 (0:00:00.221) 0:00:52.468 ********* 2025-05-14 02:24:11.476788 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:11.478217 | orchestrator | 2025-05-14 02:24:11.479830 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:11.482783 | orchestrator | Wednesday 14 May 2025 02:24:11 +0000 (0:00:00.228) 0:00:52.696 ********* 2025-05-14 02:24:12.019500 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d) 2025-05-14 02:24:12.019840 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d) 2025-05-14 02:24:12.020672 | orchestrator | 2025-05-14 02:24:12.021845 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:12.022694 | orchestrator | Wednesday 14 May 2025 02:24:12 +0000 (0:00:00.540) 0:00:53.237 ********* 2025-05-14 02:24:12.621268 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_dfedfdfd-f02f-46ee-b152-0d1db465af93) 2025-05-14 02:24:12.621579 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_dfedfdfd-f02f-46ee-b152-0d1db465af93) 2025-05-14 02:24:12.622511 | orchestrator | 2025-05-14 02:24:12.623116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:12.623523 | orchestrator | Wednesday 14 May 2025 02:24:12 +0000 (0:00:00.603) 0:00:53.840 ********* 2025-05-14 02:24:13.185170 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b728a659-cffd-44e0-b567-754457aa92dd) 2025-05-14 02:24:13.185272 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b728a659-cffd-44e0-b567-754457aa92dd) 2025-05-14 02:24:13.186343 | orchestrator | 2025-05-14 02:24:13.187167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:13.187839 | orchestrator | Wednesday 14 May 2025 02:24:13 +0000 (0:00:00.562) 0:00:54.403 ********* 2025-05-14 02:24:13.658974 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0315b34d-7399-4bf5-aad0-c6c82dbe1c9e) 2025-05-14 02:24:13.659259 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0315b34d-7399-4bf5-aad0-c6c82dbe1c9e) 2025-05-14 02:24:13.660095 | orchestrator | 2025-05-14 02:24:13.660874 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-14 02:24:13.661443 | orchestrator | Wednesday 14 May 2025 02:24:13 +0000 (0:00:00.474) 0:00:54.878 ********* 2025-05-14 02:24:14.038497 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-14 02:24:14.038629 | orchestrator | 2025-05-14 02:24:14.039150 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:14.039650 | orchestrator | Wednesday 14 May 2025 02:24:14 +0000 (0:00:00.379) 0:00:55.257 ********* 2025-05-14 02:24:14.527702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-14 02:24:14.527882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
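The play now under way for testbed-node-5 repeats the sequence already recorded for testbed-node-3 and testbed-node-4: enumerate the block devices together with their stable device links (the scsi-*/ata-* items above) and partitions, create one LVM volume group (ceph-<uuid>) and one logical volume (osd-block-<uuid>) per entry in ceph_osd_devices, and finally gather the resulting LVs and PVs into the lvm_report printed at the end of each play. The referenced task files (/ansible/tasks/_add-device-links.yml, /ansible/tasks/_add-device-partitions.yml) are not part of this log, so the sketch below is only a rough, hypothetical reconstruction of the core steps, assuming the community.general.lvg and community.general.lvol modules and a ceph_osd_devices dict shaped like the items printed by the "Create dict of block VGs -> PVs from ceph_osd_devices" task; it omits the VG size and OSD count checks that the real play performs.

# Hypothetical sketch, not the role code used in this job.
- name: Get initial list of available block devices
  ansible.builtin.command: lsblk --nodeps --noheadings --output NAME
  register: _lsblk_result
  changed_when: false

- name: Create dict of block VGs -> PVs from ceph_osd_devices
  ansible.builtin.set_fact:
    _block_vgs: "{{ _block_vgs | default({}) | combine({'ceph-' + item.value.osd_lvm_uuid: '/dev/' + item.key}) }}"
  loop: "{{ ceph_osd_devices | dict2items }}"  # items like {'key': 'sdb', 'value': {'osd_lvm_uuid': '...'}}

- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.key }}"
    pvs: "{{ item.value }}"
  loop: "{{ _block_vgs | dict2items }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"
    size: "100%FREE"
  loop: "{{ lvm_volumes }}"  # items like {'data': 'osd-block-<uuid>', 'data_vg': 'ceph-<uuid>'}

- name: Get list of Ceph LVs with associated VGs
  ansible.builtin.command: lvs --select 'vg_name =~ ^ceph-' -o lv_name,vg_name --reportformat json
  register: _lvs_cmd_output
  changed_when: false

The many skipped DB/WAL tasks in this run are consistent with a testbed configuration that defines only ceph_osd_devices (sdb and sdc) and no separate ceph_db_devices, ceph_wal_devices or ceph_db_wal_devices.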
2025-05-14 02:24:14.528163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-14 02:24:14.528599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-14 02:24:14.529720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-14 02:24:14.530069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-14 02:24:14.530096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-14 02:24:14.530468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-14 02:24:14.530694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-14 02:24:14.531049 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-14 02:24:14.531199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-14 02:24:14.531683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-14 02:24:14.531922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-14 02:24:14.533203 | orchestrator | 2025-05-14 02:24:14.533228 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:14.533234 | orchestrator | Wednesday 14 May 2025 02:24:14 +0000 (0:00:00.491) 0:00:55.748 ********* 2025-05-14 02:24:15.172461 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:15.173368 | orchestrator | 2025-05-14 02:24:15.174004 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:15.174695 | orchestrator | Wednesday 14 May 2025 02:24:15 +0000 (0:00:00.642) 0:00:56.391 ********* 2025-05-14 02:24:15.372032 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:15.372212 | orchestrator | 2025-05-14 02:24:15.373168 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:15.374182 | orchestrator | Wednesday 14 May 2025 02:24:15 +0000 (0:00:00.200) 0:00:56.592 ********* 2025-05-14 02:24:15.579498 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:15.579593 | orchestrator | 2025-05-14 02:24:15.579630 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:15.580470 | orchestrator | Wednesday 14 May 2025 02:24:15 +0000 (0:00:00.207) 0:00:56.799 ********* 2025-05-14 02:24:15.803938 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:15.804076 | orchestrator | 2025-05-14 02:24:15.804406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:15.804912 | orchestrator | Wednesday 14 May 2025 02:24:15 +0000 (0:00:00.225) 0:00:57.024 ********* 2025-05-14 02:24:16.012384 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:16.017903 | orchestrator | 2025-05-14 02:24:16.019024 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:16.020413 | orchestrator | Wednesday 14 May 2025 02:24:16 +0000 (0:00:00.205) 0:00:57.230 ********* 2025-05-14 02:24:16.241324 | orchestrator | 
skipping: [testbed-node-5] 2025-05-14 02:24:16.241796 | orchestrator | 2025-05-14 02:24:16.243013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:16.243552 | orchestrator | Wednesday 14 May 2025 02:24:16 +0000 (0:00:00.231) 0:00:57.462 ********* 2025-05-14 02:24:16.422188 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:16.423039 | orchestrator | 2025-05-14 02:24:16.423658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:16.425397 | orchestrator | Wednesday 14 May 2025 02:24:16 +0000 (0:00:00.180) 0:00:57.642 ********* 2025-05-14 02:24:16.614457 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:16.614582 | orchestrator | 2025-05-14 02:24:16.614622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:16.614924 | orchestrator | Wednesday 14 May 2025 02:24:16 +0000 (0:00:00.193) 0:00:57.835 ********* 2025-05-14 02:24:17.379906 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-14 02:24:17.382416 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-14 02:24:17.382469 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-14 02:24:17.382480 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-14 02:24:17.382491 | orchestrator | 2025-05-14 02:24:17.382829 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:17.383571 | orchestrator | Wednesday 14 May 2025 02:24:17 +0000 (0:00:00.763) 0:00:58.599 ********* 2025-05-14 02:24:17.568386 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:17.569341 | orchestrator | 2025-05-14 02:24:17.570102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:17.570661 | orchestrator | Wednesday 14 May 2025 02:24:17 +0000 (0:00:00.188) 0:00:58.787 ********* 2025-05-14 02:24:18.096958 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:18.097277 | orchestrator | 2025-05-14 02:24:18.098101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:18.099586 | orchestrator | Wednesday 14 May 2025 02:24:18 +0000 (0:00:00.527) 0:00:59.315 ********* 2025-05-14 02:24:18.266457 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:18.266987 | orchestrator | 2025-05-14 02:24:18.267892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-14 02:24:18.268843 | orchestrator | Wednesday 14 May 2025 02:24:18 +0000 (0:00:00.171) 0:00:59.487 ********* 2025-05-14 02:24:18.452937 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:18.453169 | orchestrator | 2025-05-14 02:24:18.453918 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-14 02:24:18.454546 | orchestrator | Wednesday 14 May 2025 02:24:18 +0000 (0:00:00.186) 0:00:59.673 ********* 2025-05-14 02:24:18.604393 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:18.604523 | orchestrator | 2025-05-14 02:24:18.604536 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-14 02:24:18.605284 | orchestrator | Wednesday 14 May 2025 02:24:18 +0000 (0:00:00.148) 0:00:59.822 ********* 2025-05-14 02:24:18.808859 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'4aa0a295-50da-5a6e-9e1c-976797741e16'}}) 2025-05-14 02:24:18.810810 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '19540cc4-3279-5090-817a-02eeffb19a16'}}) 2025-05-14 02:24:18.810884 | orchestrator | 2025-05-14 02:24:18.810908 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-14 02:24:18.810929 | orchestrator | Wednesday 14 May 2025 02:24:18 +0000 (0:00:00.207) 0:01:00.030 ********* 2025-05-14 02:24:20.595646 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'}) 2025-05-14 02:24:20.595985 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'}) 2025-05-14 02:24:20.596756 | orchestrator | 2025-05-14 02:24:20.597534 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-14 02:24:20.597667 | orchestrator | Wednesday 14 May 2025 02:24:20 +0000 (0:00:01.784) 0:01:01.814 ********* 2025-05-14 02:24:20.774598 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:20.775011 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:20.776078 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:20.777284 | orchestrator | 2025-05-14 02:24:20.778127 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-14 02:24:20.778966 | orchestrator | Wednesday 14 May 2025 02:24:20 +0000 (0:00:00.179) 0:01:01.994 ********* 2025-05-14 02:24:22.069502 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'}) 2025-05-14 02:24:22.069787 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'}) 2025-05-14 02:24:22.070520 | orchestrator | 2025-05-14 02:24:22.071965 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-14 02:24:22.071981 | orchestrator | Wednesday 14 May 2025 02:24:22 +0000 (0:00:01.293) 0:01:03.288 ********* 2025-05-14 02:24:22.244529 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:22.245417 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:22.245932 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:22.247275 | orchestrator | 2025-05-14 02:24:22.249470 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-14 02:24:22.249536 | orchestrator | Wednesday 14 May 2025 02:24:22 +0000 (0:00:00.176) 0:01:03.464 ********* 2025-05-14 02:24:22.598846 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:22.598972 | orchestrator | 2025-05-14 02:24:22.599182 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-05-14 02:24:22.600129 | orchestrator | Wednesday 14 May 2025 02:24:22 +0000 (0:00:00.353) 0:01:03.818 ********* 2025-05-14 02:24:22.775843 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:22.776364 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:22.777699 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:22.778813 | orchestrator | 2025-05-14 02:24:22.780345 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-14 02:24:22.782218 | orchestrator | Wednesday 14 May 2025 02:24:22 +0000 (0:00:00.175) 0:01:03.994 ********* 2025-05-14 02:24:22.924809 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:22.925262 | orchestrator | 2025-05-14 02:24:22.926461 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-14 02:24:22.927027 | orchestrator | Wednesday 14 May 2025 02:24:22 +0000 (0:00:00.150) 0:01:04.144 ********* 2025-05-14 02:24:23.132271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:23.132382 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:23.133082 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:23.134357 | orchestrator | 2025-05-14 02:24:23.135109 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-14 02:24:23.136160 | orchestrator | Wednesday 14 May 2025 02:24:23 +0000 (0:00:00.205) 0:01:04.350 ********* 2025-05-14 02:24:23.267479 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:23.267684 | orchestrator | 2025-05-14 02:24:23.268964 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-14 02:24:23.269310 | orchestrator | Wednesday 14 May 2025 02:24:23 +0000 (0:00:00.136) 0:01:04.487 ********* 2025-05-14 02:24:23.468041 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:23.469919 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:23.470486 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:23.471534 | orchestrator | 2025-05-14 02:24:23.474814 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-14 02:24:23.474841 | orchestrator | Wednesday 14 May 2025 02:24:23 +0000 (0:00:00.196) 0:01:04.684 ********* 2025-05-14 02:24:23.610988 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:23.611421 | orchestrator | 2025-05-14 02:24:23.612187 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-14 02:24:23.613217 | orchestrator | Wednesday 14 May 2025 02:24:23 +0000 (0:00:00.148) 0:01:04.832 ********* 2025-05-14 02:24:23.792104 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:23.792574 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:23.796094 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:23.796155 | orchestrator | 2025-05-14 02:24:23.796213 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-14 02:24:23.797152 | orchestrator | Wednesday 14 May 2025 02:24:23 +0000 (0:00:00.178) 0:01:05.011 ********* 2025-05-14 02:24:23.973896 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:23.975476 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:23.976864 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:23.976912 | orchestrator | 2025-05-14 02:24:23.977581 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-14 02:24:23.978312 | orchestrator | Wednesday 14 May 2025 02:24:23 +0000 (0:00:00.180) 0:01:05.191 ********* 2025-05-14 02:24:24.145270 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:24.145616 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:24.146825 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:24.147139 | orchestrator | 2025-05-14 02:24:24.147878 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-14 02:24:24.150371 | orchestrator | Wednesday 14 May 2025 02:24:24 +0000 (0:00:00.174) 0:01:05.365 ********* 2025-05-14 02:24:24.281980 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:24.282877 | orchestrator | 2025-05-14 02:24:24.283719 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-14 02:24:24.284945 | orchestrator | Wednesday 14 May 2025 02:24:24 +0000 (0:00:00.136) 0:01:05.502 ********* 2025-05-14 02:24:24.421586 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:24.422651 | orchestrator | 2025-05-14 02:24:24.423315 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-14 02:24:24.424928 | orchestrator | Wednesday 14 May 2025 02:24:24 +0000 (0:00:00.139) 0:01:05.641 ********* 2025-05-14 02:24:24.793721 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:24.794651 | orchestrator | 2025-05-14 02:24:24.795313 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-14 02:24:24.796388 | orchestrator | Wednesday 14 May 2025 02:24:24 +0000 (0:00:00.366) 0:01:06.008 ********* 2025-05-14 02:24:24.948504 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 02:24:24.948841 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-14 02:24:24.949286 | orchestrator | } 2025-05-14 02:24:24.950364 | orchestrator | 2025-05-14 02:24:24.951432 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-14 02:24:24.951634 | orchestrator | Wednesday 14 May 2025 02:24:24 +0000 (0:00:00.160) 0:01:06.168 ********* 2025-05-14 02:24:25.095466 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 02:24:25.095947 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-14 02:24:25.096604 | orchestrator | } 2025-05-14 02:24:25.097254 | orchestrator | 2025-05-14 02:24:25.097846 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-14 02:24:25.098169 | orchestrator | Wednesday 14 May 2025 02:24:25 +0000 (0:00:00.146) 0:01:06.315 ********* 2025-05-14 02:24:25.248524 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 02:24:25.249636 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-14 02:24:25.251157 | orchestrator | } 2025-05-14 02:24:25.252986 | orchestrator | 2025-05-14 02:24:25.254355 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-14 02:24:25.255142 | orchestrator | Wednesday 14 May 2025 02:24:25 +0000 (0:00:00.153) 0:01:06.468 ********* 2025-05-14 02:24:25.808407 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:25.809038 | orchestrator | 2025-05-14 02:24:25.810294 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-14 02:24:25.811505 | orchestrator | Wednesday 14 May 2025 02:24:25 +0000 (0:00:00.555) 0:01:07.024 ********* 2025-05-14 02:24:26.312104 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:26.313247 | orchestrator | 2025-05-14 02:24:26.314169 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-14 02:24:26.314715 | orchestrator | Wednesday 14 May 2025 02:24:26 +0000 (0:00:00.507) 0:01:07.531 ********* 2025-05-14 02:24:26.828609 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:26.830807 | orchestrator | 2025-05-14 02:24:26.833330 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-14 02:24:26.833696 | orchestrator | Wednesday 14 May 2025 02:24:26 +0000 (0:00:00.515) 0:01:08.046 ********* 2025-05-14 02:24:26.994291 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:26.994391 | orchestrator | 2025-05-14 02:24:26.994812 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-14 02:24:26.995609 | orchestrator | Wednesday 14 May 2025 02:24:26 +0000 (0:00:00.167) 0:01:08.214 ********* 2025-05-14 02:24:27.135285 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:27.135381 | orchestrator | 2025-05-14 02:24:27.135936 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-14 02:24:27.137691 | orchestrator | Wednesday 14 May 2025 02:24:27 +0000 (0:00:00.138) 0:01:08.352 ********* 2025-05-14 02:24:27.262710 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:27.263384 | orchestrator | 2025-05-14 02:24:27.264269 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-14 02:24:27.265114 | orchestrator | Wednesday 14 May 2025 02:24:27 +0000 (0:00:00.130) 0:01:08.483 ********* 2025-05-14 02:24:27.655943 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 02:24:27.656385 | orchestrator |  "vgs_report": { 2025-05-14 02:24:27.657710 | orchestrator |  "vg": [] 2025-05-14 02:24:27.658718 | orchestrator |  } 2025-05-14 02:24:27.659937 | orchestrator 
| } 2025-05-14 02:24:27.660228 | orchestrator | 2025-05-14 02:24:27.660810 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-14 02:24:27.661678 | orchestrator | Wednesday 14 May 2025 02:24:27 +0000 (0:00:00.391) 0:01:08.875 ********* 2025-05-14 02:24:27.836187 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:27.836850 | orchestrator | 2025-05-14 02:24:27.837996 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-14 02:24:27.838612 | orchestrator | Wednesday 14 May 2025 02:24:27 +0000 (0:00:00.178) 0:01:09.053 ********* 2025-05-14 02:24:27.997620 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:27.998966 | orchestrator | 2025-05-14 02:24:27.999052 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-14 02:24:28.001124 | orchestrator | Wednesday 14 May 2025 02:24:27 +0000 (0:00:00.163) 0:01:09.217 ********* 2025-05-14 02:24:28.138188 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:28.138621 | orchestrator | 2025-05-14 02:24:28.139218 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-14 02:24:28.139961 | orchestrator | Wednesday 14 May 2025 02:24:28 +0000 (0:00:00.140) 0:01:09.357 ********* 2025-05-14 02:24:28.284093 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:28.284448 | orchestrator | 2025-05-14 02:24:28.286527 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-14 02:24:28.287006 | orchestrator | Wednesday 14 May 2025 02:24:28 +0000 (0:00:00.146) 0:01:09.504 ********* 2025-05-14 02:24:28.437185 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:28.437967 | orchestrator | 2025-05-14 02:24:28.438502 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-14 02:24:28.440358 | orchestrator | Wednesday 14 May 2025 02:24:28 +0000 (0:00:00.151) 0:01:09.655 ********* 2025-05-14 02:24:28.580632 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:28.580798 | orchestrator | 2025-05-14 02:24:28.581441 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-14 02:24:28.584080 | orchestrator | Wednesday 14 May 2025 02:24:28 +0000 (0:00:00.144) 0:01:09.800 ********* 2025-05-14 02:24:28.718489 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:28.718655 | orchestrator | 2025-05-14 02:24:28.719960 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-14 02:24:28.720990 | orchestrator | Wednesday 14 May 2025 02:24:28 +0000 (0:00:00.136) 0:01:09.937 ********* 2025-05-14 02:24:28.872005 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:28.872497 | orchestrator | 2025-05-14 02:24:28.873291 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-14 02:24:28.874127 | orchestrator | Wednesday 14 May 2025 02:24:28 +0000 (0:00:00.151) 0:01:10.089 ********* 2025-05-14 02:24:29.021508 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:29.023174 | orchestrator | 2025-05-14 02:24:29.024456 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-14 02:24:29.025089 | orchestrator | Wednesday 14 May 2025 02:24:29 +0000 (0:00:00.152) 0:01:10.241 ********* 2025-05-14 02:24:29.160591 | orchestrator | 
skipping: [testbed-node-5] 2025-05-14 02:24:29.161085 | orchestrator | 2025-05-14 02:24:29.162291 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-14 02:24:29.162927 | orchestrator | Wednesday 14 May 2025 02:24:29 +0000 (0:00:00.139) 0:01:10.381 ********* 2025-05-14 02:24:29.314258 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:29.315706 | orchestrator | 2025-05-14 02:24:29.316695 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-14 02:24:29.317568 | orchestrator | Wednesday 14 May 2025 02:24:29 +0000 (0:00:00.152) 0:01:10.534 ********* 2025-05-14 02:24:29.682851 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:29.686434 | orchestrator | 2025-05-14 02:24:29.686476 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-14 02:24:29.686490 | orchestrator | Wednesday 14 May 2025 02:24:29 +0000 (0:00:00.367) 0:01:10.902 ********* 2025-05-14 02:24:29.841885 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:29.842260 | orchestrator | 2025-05-14 02:24:29.844083 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-14 02:24:29.844763 | orchestrator | Wednesday 14 May 2025 02:24:29 +0000 (0:00:00.158) 0:01:11.060 ********* 2025-05-14 02:24:29.983949 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:29.986663 | orchestrator | 2025-05-14 02:24:29.986980 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-14 02:24:29.987512 | orchestrator | Wednesday 14 May 2025 02:24:29 +0000 (0:00:00.143) 0:01:11.204 ********* 2025-05-14 02:24:30.180188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:30.181027 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:30.181798 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:30.182814 | orchestrator | 2025-05-14 02:24:30.183966 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-14 02:24:30.185116 | orchestrator | Wednesday 14 May 2025 02:24:30 +0000 (0:00:00.195) 0:01:11.399 ********* 2025-05-14 02:24:30.369793 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:30.370962 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:30.371764 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:30.374386 | orchestrator | 2025-05-14 02:24:30.374428 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-14 02:24:30.374464 | orchestrator | Wednesday 14 May 2025 02:24:30 +0000 (0:00:00.188) 0:01:11.588 ********* 2025-05-14 02:24:30.545583 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:30.545801 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:30.546446 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:30.547442 | orchestrator | 2025-05-14 02:24:30.547982 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-14 02:24:30.548360 | orchestrator | Wednesday 14 May 2025 02:24:30 +0000 (0:00:00.176) 0:01:11.765 ********* 2025-05-14 02:24:30.721623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:30.721711 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:30.721723 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:30.721756 | orchestrator | 2025-05-14 02:24:30.721769 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-14 02:24:30.722447 | orchestrator | Wednesday 14 May 2025 02:24:30 +0000 (0:00:00.174) 0:01:11.940 ********* 2025-05-14 02:24:30.916412 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:30.918710 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:30.918789 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:30.918919 | orchestrator | 2025-05-14 02:24:30.919266 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-14 02:24:30.920122 | orchestrator | Wednesday 14 May 2025 02:24:30 +0000 (0:00:00.193) 0:01:12.133 ********* 2025-05-14 02:24:31.077638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:31.078591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:31.079279 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:31.080450 | orchestrator | 2025-05-14 02:24:31.080715 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-14 02:24:31.081724 | orchestrator | Wednesday 14 May 2025 02:24:31 +0000 (0:00:00.164) 0:01:12.298 ********* 2025-05-14 02:24:31.261025 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:31.262115 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:31.262677 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:31.263607 | orchestrator | 2025-05-14 02:24:31.264417 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-14 02:24:31.265096 | orchestrator | Wednesday 14 May 2025 02:24:31 +0000 (0:00:00.183) 0:01:12.481 ********* 2025-05-14 02:24:31.435669 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:31.440592 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:31.441315 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:31.442769 | orchestrator | 2025-05-14 02:24:31.445080 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-14 02:24:31.445812 | orchestrator | Wednesday 14 May 2025 02:24:31 +0000 (0:00:00.167) 0:01:12.648 ********* 2025-05-14 02:24:32.152252 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:32.153214 | orchestrator | 2025-05-14 02:24:32.154186 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-14 02:24:32.155589 | orchestrator | Wednesday 14 May 2025 02:24:32 +0000 (0:00:00.722) 0:01:13.371 ********* 2025-05-14 02:24:32.687253 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:32.687877 | orchestrator | 2025-05-14 02:24:32.688126 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-14 02:24:32.689154 | orchestrator | Wednesday 14 May 2025 02:24:32 +0000 (0:00:00.536) 0:01:13.907 ********* 2025-05-14 02:24:32.868296 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:32.869981 | orchestrator | 2025-05-14 02:24:32.870278 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-14 02:24:32.872082 | orchestrator | Wednesday 14 May 2025 02:24:32 +0000 (0:00:00.179) 0:01:14.087 ********* 2025-05-14 02:24:33.080218 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'vg_name': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'}) 2025-05-14 02:24:33.080331 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'vg_name': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'}) 2025-05-14 02:24:33.081034 | orchestrator | 2025-05-14 02:24:33.081973 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-14 02:24:33.083017 | orchestrator | Wednesday 14 May 2025 02:24:33 +0000 (0:00:00.212) 0:01:14.300 ********* 2025-05-14 02:24:33.255664 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:33.256872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:33.258381 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:33.259314 | orchestrator | 2025-05-14 02:24:33.260312 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-14 02:24:33.261159 | orchestrator | Wednesday 14 May 2025 02:24:33 +0000 (0:00:00.174) 0:01:14.474 ********* 2025-05-14 02:24:33.445120 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:33.446207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  
2025-05-14 02:24:33.447810 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:33.450177 | orchestrator | 2025-05-14 02:24:33.450333 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-14 02:24:33.451000 | orchestrator | Wednesday 14 May 2025 02:24:33 +0000 (0:00:00.183) 0:01:14.658 ********* 2025-05-14 02:24:33.608233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'})  2025-05-14 02:24:33.608848 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'})  2025-05-14 02:24:33.609424 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:33.610749 | orchestrator | 2025-05-14 02:24:33.610783 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-14 02:24:33.611115 | orchestrator | Wednesday 14 May 2025 02:24:33 +0000 (0:00:00.170) 0:01:14.829 ********* 2025-05-14 02:24:34.118307 | orchestrator | ok: [testbed-node-5] => { 2025-05-14 02:24:34.118529 | orchestrator |  "lvm_report": { 2025-05-14 02:24:34.118762 | orchestrator |  "lv": [ 2025-05-14 02:24:34.119363 | orchestrator |  { 2025-05-14 02:24:34.120204 | orchestrator |  "lv_name": "osd-block-19540cc4-3279-5090-817a-02eeffb19a16", 2025-05-14 02:24:34.120719 | orchestrator |  "vg_name": "ceph-19540cc4-3279-5090-817a-02eeffb19a16" 2025-05-14 02:24:34.121201 | orchestrator |  }, 2025-05-14 02:24:34.121484 | orchestrator |  { 2025-05-14 02:24:34.122385 | orchestrator |  "lv_name": "osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16", 2025-05-14 02:24:34.122643 | orchestrator |  "vg_name": "ceph-4aa0a295-50da-5a6e-9e1c-976797741e16" 2025-05-14 02:24:34.123209 | orchestrator |  } 2025-05-14 02:24:34.123649 | orchestrator |  ], 2025-05-14 02:24:34.124306 | orchestrator |  "pv": [ 2025-05-14 02:24:34.124789 | orchestrator |  { 2025-05-14 02:24:34.125573 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-14 02:24:34.126422 | orchestrator |  "vg_name": "ceph-4aa0a295-50da-5a6e-9e1c-976797741e16" 2025-05-14 02:24:34.126934 | orchestrator |  }, 2025-05-14 02:24:34.127543 | orchestrator |  { 2025-05-14 02:24:34.128208 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-14 02:24:34.128655 | orchestrator |  "vg_name": "ceph-19540cc4-3279-5090-817a-02eeffb19a16" 2025-05-14 02:24:34.129350 | orchestrator |  } 2025-05-14 02:24:34.129607 | orchestrator |  ] 2025-05-14 02:24:34.130419 | orchestrator |  } 2025-05-14 02:24:34.130979 | orchestrator | } 2025-05-14 02:24:34.131714 | orchestrator | 2025-05-14 02:24:34.132830 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:24:34.132893 | orchestrator | 2025-05-14 02:24:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:24:34.132932 | orchestrator | 2025-05-14 02:24:34 | INFO  | Please wait and do not abort execution. 
2025-05-14 02:24:34.133376 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-14 02:24:34.134163 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-14 02:24:34.134497 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-14 02:24:34.135429 | orchestrator | 2025-05-14 02:24:34.136297 | orchestrator | 2025-05-14 02:24:34.136815 | orchestrator | 2025-05-14 02:24:34.137256 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:24:34.137872 | orchestrator | Wednesday 14 May 2025 02:24:34 +0000 (0:00:00.509) 0:01:15.338 ********* 2025-05-14 02:24:34.138555 | orchestrator | =============================================================================== 2025-05-14 02:24:34.139030 | orchestrator | Create block VGs -------------------------------------------------------- 5.91s 2025-05-14 02:24:34.139548 | orchestrator | Create block LVs -------------------------------------------------------- 4.02s 2025-05-14 02:24:34.140361 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.28s 2025-05-14 02:24:34.140914 | orchestrator | Print LVM report data --------------------------------------------------- 2.11s 2025-05-14 02:24:34.141383 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.77s 2025-05-14 02:24:34.141849 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.66s 2025-05-14 02:24:34.142844 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.62s 2025-05-14 02:24:34.143254 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s 2025-05-14 02:24:34.143793 | orchestrator | Add known links to the list of available block devices ------------------ 1.54s 2025-05-14 02:24:34.144357 | orchestrator | Add known partitions to the list of available block devices ------------- 1.47s 2025-05-14 02:24:34.144885 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.00s 2025-05-14 02:24:34.145355 | orchestrator | Create list of VG/LV names ---------------------------------------------- 0.77s 2025-05-14 02:24:34.145832 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2025-05-14 02:24:34.146210 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.75s 2025-05-14 02:24:34.146760 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2025-05-14 02:24:34.147218 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2025-05-14 02:24:34.147727 | orchestrator | Print 'Create DB+WAL VGs' ----------------------------------------------- 0.71s 2025-05-14 02:24:34.148262 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s 2025-05-14 02:24:34.148633 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.70s 2025-05-14 02:24:34.149186 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.68s 2025-05-14 02:24:35.858179 | orchestrator | 2025-05-14 02:24:35 | INFO  | Task 74fad315-6775-46d1-8ff5-aa211bd2e527 (facts) was prepared for execution. 
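
The 'Create block VGs' and 'Create block LVs' tasks above gave each OSD device its own volume group named ceph-<osd_lvm_uuid> containing a single osd-block-<osd_lvm_uuid> logical volume; the lvm_report at the end of the play confirms that /dev/sdb and /dev/sdc back those VGs on testbed-node-5. A minimal Ansible sketch of that layout for one device, assuming the community.general collection is available and reusing the names from the log (the real play derives them from ceph_osd_devices and may size the LV differently):

- name: Create block VG for one OSD device (sketch)
  community.general.lvg:
    vg: "ceph-4aa0a295-50da-5a6e-9e1c-976797741e16"    # VG name as reported above
    pvs: /dev/sdb                                      # backing device per lvm_report
    state: present

- name: Create block LV inside that VG (sketch)
  community.general.lvol:
    vg: "ceph-4aa0a295-50da-5a6e-9e1c-976797741e16"
    lv: "osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16"
    size: 100%FREE                                     # assumption; the play may size LVs differently
    shrink: false

The data/data_vg pairs echoed in the skipped loop items are the corresponding lvm_volumes entries; the 'Fail if block LV defined in lvm_volumes is missing' task checks them against exactly these VG/LV names.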
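
The 'Get list of Ceph LVs/PVs with associated VGs' and 'Combine JSON from _lvs_cmd_output/_pvs_cmd_output' tasks produce the lvm_report printed above. A sketch of how such a report can be assembled with LVM2's JSON reporting, assuming the field selection visible in the output (the actual tasks may filter further or pass different options):

- name: Get list of Ceph LVs with associated VGs (sketch)
  ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs (sketch)
  ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
  register: _pvs_cmd_output
  changed_when: false

- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output (sketch)
  ansible.builtin.set_fact:
    lvm_report:
      lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
      pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

- name: Print LVM report data (sketch)
  ansible.builtin.debug:
    var: lvm_report

On testbed-node-5 this yields the two ceph-* VGs with their osd-block-* LVs and the /dev/sdb and /dev/sdc PVs shown in the report above.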
2025-05-14 02:24:35.858277 | orchestrator | 2025-05-14 02:24:35 | INFO  | It takes a moment until task 74fad315-6775-46d1-8ff5-aa211bd2e527 (facts) has been started and output is visible here. 2025-05-14 02:24:38.859011 | orchestrator | 2025-05-14 02:24:38.859184 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-14 02:24:38.860244 | orchestrator | 2025-05-14 02:24:38.860647 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-14 02:24:38.861021 | orchestrator | Wednesday 14 May 2025 02:24:38 +0000 (0:00:00.195) 0:00:00.195 ********* 2025-05-14 02:24:39.477105 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:24:39.977443 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:24:39.977554 | orchestrator | ok: [testbed-manager] 2025-05-14 02:24:39.977577 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:24:39.977590 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:24:39.977782 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:39.978238 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:39.979030 | orchestrator | 2025-05-14 02:24:39.980131 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-14 02:24:39.980667 | orchestrator | Wednesday 14 May 2025 02:24:39 +0000 (0:00:01.114) 0:00:01.310 ********* 2025-05-14 02:24:40.134229 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:24:40.211340 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:24:40.324061 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:24:40.398177 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:24:40.468234 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:24:41.135846 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:41.139063 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:41.139103 | orchestrator | 2025-05-14 02:24:41.139118 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-14 02:24:41.139130 | orchestrator | 2025-05-14 02:24:41.139142 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-14 02:24:41.139153 | orchestrator | Wednesday 14 May 2025 02:24:41 +0000 (0:00:01.163) 0:00:02.474 ********* 2025-05-14 02:24:45.905253 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:24:45.905423 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:24:45.906341 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:24:45.906375 | orchestrator | ok: [testbed-manager] 2025-05-14 02:24:45.911479 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:24:45.912073 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:24:45.912259 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:24:45.912956 | orchestrator | 2025-05-14 02:24:45.913494 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-14 02:24:45.914144 | orchestrator | 2025-05-14 02:24:45.914721 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-14 02:24:45.915217 | orchestrator | Wednesday 14 May 2025 02:24:45 +0000 (0:00:04.769) 0:00:07.244 ********* 2025-05-14 02:24:46.176506 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:24:46.243619 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:24:46.316208 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:24:46.386274 | orchestrator | skipping: [testbed-node-2] 2025-05-14 
02:24:46.455788 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:24:46.491410 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:24:46.491658 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:24:46.493211 | orchestrator | 2025-05-14 02:24:46.493651 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:24:46.493910 | orchestrator | 2025-05-14 02:24:46 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-14 02:24:46.493936 | orchestrator | 2025-05-14 02:24:46 | INFO  | Please wait and do not abort execution. 2025-05-14 02:24:46.494475 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:24:46.494925 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:24:46.495537 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:24:46.495885 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:24:46.496211 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:24:46.496517 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:24:46.496994 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:24:46.497300 | orchestrator | 2025-05-14 02:24:46.497778 | orchestrator | Wednesday 14 May 2025 02:24:46 +0000 (0:00:00.588) 0:00:07.832 ********* 2025-05-14 02:24:46.498075 | orchestrator | =============================================================================== 2025-05-14 02:24:46.498253 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.77s 2025-05-14 02:24:46.498537 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.16s 2025-05-14 02:24:46.498883 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s 2025-05-14 02:24:46.499269 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2025-05-14 02:24:46.906653 | orchestrator | 2025-05-14 02:24:46.910901 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed May 14 02:24:46 UTC 2025 2025-05-14 02:24:46.910956 | orchestrator | 2025-05-14 02:24:48.440022 | orchestrator | 2025-05-14 02:24:48 | INFO  | Collection nutshell is prepared for execution 2025-05-14 02:24:48.440139 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [0] - dotfiles 2025-05-14 02:24:48.444510 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [0] - homer 2025-05-14 02:24:48.444587 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [0] - netdata 2025-05-14 02:24:48.444602 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [0] - openstackclient 2025-05-14 02:24:48.444614 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [0] - phpmyadmin 2025-05-14 02:24:48.444625 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [0] - common 2025-05-14 02:24:48.446012 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [1] -- loadbalancer 2025-05-14 02:24:48.446086 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [2] --- opensearch 2025-05-14 02:24:48.446098 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [2] --- mariadb-ng 2025-05-14 02:24:48.446108 | orchestrator | 2025-05-14 
02:24:48 | INFO  | D [3] ---- horizon 2025-05-14 02:24:48.446119 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [3] ---- keystone 2025-05-14 02:24:48.446130 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [4] ----- neutron 2025-05-14 02:24:48.446141 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [5] ------ wait-for-nova 2025-05-14 02:24:48.446153 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [5] ------ octavia 2025-05-14 02:24:48.446550 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [4] ----- barbican 2025-05-14 02:24:48.446575 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [4] ----- designate 2025-05-14 02:24:48.446910 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [4] ----- ironic 2025-05-14 02:24:48.446971 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [4] ----- placement 2025-05-14 02:24:48.446985 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [4] ----- magnum 2025-05-14 02:24:48.447049 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [1] -- openvswitch 2025-05-14 02:24:48.447062 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [2] --- ovn 2025-05-14 02:24:48.447367 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [1] -- memcached 2025-05-14 02:24:48.447390 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [1] -- redis 2025-05-14 02:24:48.447401 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [1] -- rabbitmq-ng 2025-05-14 02:24:48.447412 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [0] - kubernetes 2025-05-14 02:24:48.447656 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [1] -- kubeconfig 2025-05-14 02:24:48.447680 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [1] -- copy-kubeconfig 2025-05-14 02:24:48.447692 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [0] - ceph 2025-05-14 02:24:48.450914 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [1] -- ceph-pools 2025-05-14 02:24:48.450945 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [2] --- copy-ceph-keys 2025-05-14 02:24:48.450957 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [3] ---- cephclient 2025-05-14 02:24:48.450968 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-14 02:24:48.450979 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [4] ----- wait-for-keystone 2025-05-14 02:24:48.450990 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-14 02:24:48.451027 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [5] ------ glance 2025-05-14 02:24:48.451038 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [5] ------ cinder 2025-05-14 02:24:48.451049 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [5] ------ nova 2025-05-14 02:24:48.451060 | orchestrator | 2025-05-14 02:24:48 | INFO  | A [4] ----- prometheus 2025-05-14 02:24:48.451071 | orchestrator | 2025-05-14 02:24:48 | INFO  | D [5] ------ grafana 2025-05-14 02:24:48.603251 | orchestrator | 2025-05-14 02:24:48 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-14 02:24:48.603416 | orchestrator | 2025-05-14 02:24:48 | INFO  | Tasks are running in the background 2025-05-14 02:24:50.949181 | orchestrator | 2025-05-14 02:24:50 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-14 02:24:53.044713 | orchestrator | 2025-05-14 02:24:53 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:24:53.045137 | orchestrator | 2025-05-14 02:24:53 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:24:53.045849 | orchestrator | 2025-05-14 02:24:53 | INFO  | Task 
9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:24:53.048794 | orchestrator | 2025-05-14 02:24:53 | INFO  | Task 95351fa3-edd6-4508-876f-eb0c2d8439ba is in state STARTED 2025-05-14 02:24:53.049269 | orchestrator | 2025-05-14 02:24:53 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:24:53.049907 | orchestrator | 2025-05-14 02:24:53 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:24:53.049982 | orchestrator | 2025-05-14 02:24:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:56.107899 | orchestrator | 2025-05-14 02:24:56 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:24:56.108089 | orchestrator | 2025-05-14 02:24:56 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:24:56.109473 | orchestrator | 2025-05-14 02:24:56 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:24:56.109531 | orchestrator | 2025-05-14 02:24:56 | INFO  | Task 95351fa3-edd6-4508-876f-eb0c2d8439ba is in state STARTED 2025-05-14 02:24:56.109895 | orchestrator | 2025-05-14 02:24:56 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:24:56.116672 | orchestrator | 2025-05-14 02:24:56 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:24:56.116725 | orchestrator | 2025-05-14 02:24:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:24:59.179525 | orchestrator | 2025-05-14 02:24:59 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:24:59.179631 | orchestrator | 2025-05-14 02:24:59 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:24:59.179646 | orchestrator | 2025-05-14 02:24:59 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:24:59.179658 | orchestrator | 2025-05-14 02:24:59 | INFO  | Task 95351fa3-edd6-4508-876f-eb0c2d8439ba is in state STARTED 2025-05-14 02:24:59.179670 | orchestrator | 2025-05-14 02:24:59 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:24:59.179681 | orchestrator | 2025-05-14 02:24:59 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:24:59.179692 | orchestrator | 2025-05-14 02:24:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:02.243463 | orchestrator | 2025-05-14 02:25:02 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:02.243603 | orchestrator | 2025-05-14 02:25:02 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:25:02.243619 | orchestrator | 2025-05-14 02:25:02 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:02.247858 | orchestrator | 2025-05-14 02:25:02 | INFO  | Task 95351fa3-edd6-4508-876f-eb0c2d8439ba is in state STARTED 2025-05-14 02:25:02.247898 | orchestrator | 2025-05-14 02:25:02 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:02.247909 | orchestrator | 2025-05-14 02:25:02 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:02.247921 | orchestrator | 2025-05-14 02:25:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:05.288369 | orchestrator | 2025-05-14 02:25:05 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:05.288480 | orchestrator | 2025-05-14 
02:25:05 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:25:05.288577 | orchestrator | 2025-05-14 02:25:05 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:05.290910 | orchestrator | 2025-05-14 02:25:05 | INFO  | Task 95351fa3-edd6-4508-876f-eb0c2d8439ba is in state STARTED 2025-05-14 02:25:05.290950 | orchestrator | 2025-05-14 02:25:05 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:05.291239 | orchestrator | 2025-05-14 02:25:05 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:05.291337 | orchestrator | 2025-05-14 02:25:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:08.346427 | orchestrator | 2025-05-14 02:25:08 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:08.346537 | orchestrator | 2025-05-14 02:25:08 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:25:08.363583 | orchestrator | 2025-05-14 02:25:08 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:08.363666 | orchestrator | 2025-05-14 02:25:08 | INFO  | Task 95351fa3-edd6-4508-876f-eb0c2d8439ba is in state STARTED 2025-05-14 02:25:08.363679 | orchestrator | 2025-05-14 02:25:08 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:08.363690 | orchestrator | 2025-05-14 02:25:08 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:08.363702 | orchestrator | 2025-05-14 02:25:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:11.414556 | orchestrator | 2025-05-14 02:25:11 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:11.415173 | orchestrator | 2025-05-14 02:25:11 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:25:11.419810 | orchestrator | 2025-05-14 02:25:11 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:11.421648 | orchestrator | 2025-05-14 02:25:11.421701 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-14 02:25:11.421714 | orchestrator | 2025-05-14 02:25:11.421726 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-05-14 02:25:11.421737 | orchestrator | Wednesday 14 May 2025 02:24:58 +0000 (0:00:00.399) 0:00:00.399 ********* 2025-05-14 02:25:11.421777 | orchestrator | changed: [testbed-manager] 2025-05-14 02:25:11.421797 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:25:11.421815 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:25:11.421832 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:25:11.421877 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:25:11.421895 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:25:11.421914 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:25:11.421933 | orchestrator | 2025-05-14 02:25:11.421952 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-05-14 02:25:11.421971 | orchestrator | Wednesday 14 May 2025 02:25:01 +0000 (0:00:03.586) 0:00:03.986 ********* 2025-05-14 02:25:11.421990 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-14 02:25:11.422010 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-14 02:25:11.422071 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-14 02:25:11.422082 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-14 02:25:11.422093 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-14 02:25:11.422104 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-14 02:25:11.422114 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-14 02:25:11.422125 | orchestrator | 2025-05-14 02:25:11.422136 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-05-14 02:25:11.422146 | orchestrator | Wednesday 14 May 2025 02:25:03 +0000 (0:00:02.297) 0:00:06.283 ********* 2025-05-14 02:25:11.422162 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:25:02.389818', 'end': '2025-05-14 02:25:02.392811', 'delta': '0:00:00.002993', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:25:11.422192 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:25:02.323771', 'end': '2025-05-14 02:25:02.332366', 'delta': '0:00:00.008595', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:25:11.422204 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:25:02.712728', 'end': '2025-05-14 02:25:02.717300', 'delta': '0:00:00.004572', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:25:11.422243 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:25:03.191273', 'end': '2025-05-14 02:25:03.201044', 'delta': '0:00:00.009771', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:25:11.422268 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:25:03.362446', 'end': '2025-05-14 02:25:03.370248', 'delta': '0:00:00.007802', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:25:11.422282 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:25:03.486399', 'end': '2025-05-14 02:25:03.494125', 'delta': '0:00:00.007726', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:25:11.422300 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-14 02:25:03.676457', 'end': '2025-05-14 02:25:03.683462', 'delta': '0:00:00.007005', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-14 02:25:11.422314 | orchestrator | 2025-05-14 02:25:11.422326 
| orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-05-14 02:25:11.422339 | orchestrator | Wednesday 14 May 2025 02:25:06 +0000 (0:00:02.632) 0:00:08.916 ********* 2025-05-14 02:25:11.422351 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-14 02:25:11.422362 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-14 02:25:11.422373 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-14 02:25:11.422384 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-14 02:25:11.422394 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-14 02:25:11.422405 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-14 02:25:11.422415 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-14 02:25:11.422426 | orchestrator | 2025-05-14 02:25:11.422445 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:25:11.422457 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:25:11.422469 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:25:11.422480 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:25:11.422496 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:25:11.422508 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:25:11.422519 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:25:11.422530 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:25:11.422541 | orchestrator | 2025-05-14 02:25:11.422553 | orchestrator | Wednesday 14 May 2025 02:25:09 +0000 (0:00:02.954) 0:00:11.870 ********* 2025-05-14 02:25:11.422571 | orchestrator | =============================================================================== 2025-05-14 02:25:11.422589 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.59s 2025-05-14 02:25:11.422611 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.95s 2025-05-14 02:25:11.422638 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.63s 2025-05-14 02:25:11.422655 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 2.30s 2025-05-14 02:25:11.422711 | orchestrator | 2025-05-14 02:25:11 | INFO  | Task 95351fa3-edd6-4508-876f-eb0c2d8439ba is in state SUCCESS 2025-05-14 02:25:11.423124 | orchestrator | 2025-05-14 02:25:11 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:11.426617 | orchestrator | 2025-05-14 02:25:11 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:11.427988 | orchestrator | 2025-05-14 02:25:11 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:11.431728 | orchestrator | 2025-05-14 02:25:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:14.486262 | orchestrator | 2025-05-14 02:25:14 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:14.486335 | orchestrator | 2025-05-14 02:25:14 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:25:14.486370 | orchestrator | 2025-05-14 02:25:14 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:14.489225 | orchestrator | 2025-05-14 02:25:14 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:14.489891 | orchestrator | 2025-05-14 02:25:14 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:14.489920 | orchestrator | 2025-05-14 02:25:14 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:14.489929 | orchestrator | 2025-05-14 02:25:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:17.538544 | orchestrator | 2025-05-14 02:25:17 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:17.540289 | orchestrator | 2025-05-14 02:25:17 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:25:17.542370 | orchestrator | 2025-05-14 02:25:17 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:17.542435 | orchestrator | 2025-05-14 02:25:17 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:17.543224 | orchestrator | 2025-05-14 02:25:17 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:17.548296 | orchestrator | 2025-05-14 02:25:17 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:17.548351 | orchestrator | 2025-05-14 02:25:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:20.609043 | orchestrator | 2025-05-14 02:25:20 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:20.613202 | orchestrator | 2025-05-14 02:25:20 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:25:20.613263 | orchestrator | 2025-05-14 02:25:20 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:20.614098 | orchestrator | 2025-05-14 02:25:20 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:20.615741 | orchestrator | 2025-05-14 02:25:20 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:20.616344 | orchestrator | 2025-05-14 02:25:20 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:20.616435 | orchestrator | 2025-05-14 02:25:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:23.712068 | orchestrator | 2025-05-14 02:25:23 | INFO  | Task 
f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:23.712687 | orchestrator | 2025-05-14 02:25:23 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:25:23.717300 | orchestrator | 2025-05-14 02:25:23 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:23.720120 | orchestrator | 2025-05-14 02:25:23 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:23.724027 | orchestrator | 2025-05-14 02:25:23 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:23.729893 | orchestrator | 2025-05-14 02:25:23 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:23.729967 | orchestrator | 2025-05-14 02:25:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:26.787660 | orchestrator | 2025-05-14 02:25:26 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:26.787799 | orchestrator | 2025-05-14 02:25:26 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:25:26.790783 | orchestrator | 2025-05-14 02:25:26 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:26.790841 | orchestrator | 2025-05-14 02:25:26 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:26.790847 | orchestrator | 2025-05-14 02:25:26 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:26.793866 | orchestrator | 2025-05-14 02:25:26 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:26.793928 | orchestrator | 2025-05-14 02:25:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:29.849313 | orchestrator | 2025-05-14 02:25:29 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:29.851629 | orchestrator | 2025-05-14 02:25:29 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state STARTED 2025-05-14 02:25:29.854174 | orchestrator | 2025-05-14 02:25:29 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:29.857701 | orchestrator | 2025-05-14 02:25:29 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:29.860739 | orchestrator | 2025-05-14 02:25:29 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:29.863469 | orchestrator | 2025-05-14 02:25:29 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:29.863507 | orchestrator | 2025-05-14 02:25:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:32.934369 | orchestrator | 2025-05-14 02:25:32 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:32.934446 | orchestrator | 2025-05-14 02:25:32 | INFO  | Task bdc3fd74-b8d8-4e28-bb79-d45e1512a601 is in state SUCCESS 2025-05-14 02:25:32.939042 | orchestrator | 2025-05-14 02:25:32 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:32.943360 | orchestrator | 2025-05-14 02:25:32 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:32.946002 | orchestrator | 2025-05-14 02:25:32 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:32.948694 | orchestrator | 2025-05-14 02:25:32 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:32.948748 | 
orchestrator | 2025-05-14 02:25:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:36.013305 | orchestrator | 2025-05-14 02:25:36 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:36.014187 | orchestrator | 2025-05-14 02:25:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:25:36.020350 | orchestrator | 2025-05-14 02:25:36 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:36.024854 | orchestrator | 2025-05-14 02:25:36 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:36.027710 | orchestrator | 2025-05-14 02:25:36 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:36.030245 | orchestrator | 2025-05-14 02:25:36 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:36.030298 | orchestrator | 2025-05-14 02:25:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:39.100935 | orchestrator | 2025-05-14 02:25:39 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:39.105038 | orchestrator | 2025-05-14 02:25:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:25:39.108587 | orchestrator | 2025-05-14 02:25:39 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:39.112018 | orchestrator | 2025-05-14 02:25:39 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:39.112968 | orchestrator | 2025-05-14 02:25:39 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:39.116089 | orchestrator | 2025-05-14 02:25:39 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:39.116302 | orchestrator | 2025-05-14 02:25:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:42.170135 | orchestrator | 2025-05-14 02:25:42 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:42.170692 | orchestrator | 2025-05-14 02:25:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:25:42.173636 | orchestrator | 2025-05-14 02:25:42 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:42.173676 | orchestrator | 2025-05-14 02:25:42 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:42.173681 | orchestrator | 2025-05-14 02:25:42 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:42.177969 | orchestrator | 2025-05-14 02:25:42 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:42.178001 | orchestrator | 2025-05-14 02:25:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:45.213869 | orchestrator | 2025-05-14 02:25:45 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:45.215217 | orchestrator | 2025-05-14 02:25:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:25:45.215647 | orchestrator | 2025-05-14 02:25:45 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:45.216458 | orchestrator | 2025-05-14 02:25:45 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:45.217432 | orchestrator | 2025-05-14 02:25:45 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 
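The geerlingguy.dotfiles play above follows a check/remove/link pattern: `ls -F ~/<file>` probes each configured dotfile (a missing file returns rc=2 but is not treated as a failure), any plain-file copy is removed if a replacement is about to be linked, and the file is then symlinked from the locally cloned repository into the home directory. A minimal sketch of that pattern follows; the dotfiles_files and dotfiles_repo_local_destination variable names and the '@' symlink test are assumptions for illustration, not the role's verbatim source.

    # Hypothetical sketch of the check/remove/link flow seen in the
    # geerlingguy.dotfiles output above; variable names and the '@' symlink
    # test are assumptions, not the role's verbatim source.
    - name: Ensure all configured dotfiles are links.
      ansible.builtin.command: "ls -F ~/{{ item }}"
      register: existing_dotfile_info
      changed_when: false
      failed_when: false
      loop: "{{ dotfiles_files }}"

    - name: Remove existing dotfiles file if a replacement is being linked.
      ansible.builtin.file:
        path: "~/{{ item.item }}"
        state: absent
      when: "'@' not in item.stdout"
      loop: "{{ existing_dotfile_info.results }}"

    - name: Link dotfiles into home folder.
      ansible.builtin.file:
        src: "{{ dotfiles_repo_local_destination }}/{{ item }}"
        path: "~/{{ item }}"
        state: link
      loop: "{{ dotfiles_files }}"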
2025-05-14 02:25:45.220267 | orchestrator | 2025-05-14 02:25:45 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:45.220300 | orchestrator | 2025-05-14 02:25:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:48.267425 | orchestrator | 2025-05-14 02:25:48 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:48.267567 | orchestrator | 2025-05-14 02:25:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:25:48.268099 | orchestrator | 2025-05-14 02:25:48 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:48.268471 | orchestrator | 2025-05-14 02:25:48 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:48.268999 | orchestrator | 2025-05-14 02:25:48 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:48.269517 | orchestrator | 2025-05-14 02:25:48 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:48.269700 | orchestrator | 2025-05-14 02:25:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:51.330492 | orchestrator | 2025-05-14 02:25:51 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:51.330637 | orchestrator | 2025-05-14 02:25:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:25:51.331896 | orchestrator | 2025-05-14 02:25:51 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:51.331922 | orchestrator | 2025-05-14 02:25:51 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:51.333107 | orchestrator | 2025-05-14 02:25:51 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:51.333500 | orchestrator | 2025-05-14 02:25:51 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:51.333593 | orchestrator | 2025-05-14 02:25:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:54.417548 | orchestrator | 2025-05-14 02:25:54 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state STARTED 2025-05-14 02:25:54.417618 | orchestrator | 2025-05-14 02:25:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:25:54.417624 | orchestrator | 2025-05-14 02:25:54 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:54.417648 | orchestrator | 2025-05-14 02:25:54 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:54.420612 | orchestrator | 2025-05-14 02:25:54 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:54.427576 | orchestrator | 2025-05-14 02:25:54 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:54.427617 | orchestrator | 2025-05-14 02:25:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:25:57.473402 | orchestrator | 2025-05-14 02:25:57 | INFO  | Task f0a20150-4d14-403c-9206-3e8a3b36f42d is in state SUCCESS 2025-05-14 02:25:57.473577 | orchestrator | 2025-05-14 02:25:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:25:57.473844 | orchestrator | 2025-05-14 02:25:57 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:25:57.474394 | orchestrator | 2025-05-14 02:25:57 | INFO  | Task 
7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:25:57.474886 | orchestrator | 2025-05-14 02:25:57 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:25:57.476489 | orchestrator | 2025-05-14 02:25:57 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:25:57.476512 | orchestrator | 2025-05-14 02:25:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:00.541249 | orchestrator | 2025-05-14 02:26:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:00.544348 | orchestrator | 2025-05-14 02:26:00 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state STARTED 2025-05-14 02:26:00.546168 | orchestrator | 2025-05-14 02:26:00 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:00.548058 | orchestrator | 2025-05-14 02:26:00 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:00.549265 | orchestrator | 2025-05-14 02:26:00 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:00.549812 | orchestrator | 2025-05-14 02:26:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:03.618535 | orchestrator | 2025-05-14 02:26:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:03.624507 | orchestrator | 2025-05-14 02:26:03.624577 | orchestrator | 2025-05-14 02:26:03.624591 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-05-14 02:26:03.624604 | orchestrator | 2025-05-14 02:26:03.624615 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-05-14 02:26:03.624626 | orchestrator | Wednesday 14 May 2025 02:24:58 +0000 (0:00:00.375) 0:00:00.375 ********* 2025-05-14 02:26:03.624637 | orchestrator | ok: [testbed-manager] => { 2025-05-14 02:26:03.624650 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-05-14 02:26:03.624663 | orchestrator | } 2025-05-14 02:26:03.624674 | orchestrator | 2025-05-14 02:26:03.624685 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-05-14 02:26:03.624695 | orchestrator | Wednesday 14 May 2025 02:24:58 +0000 (0:00:00.302) 0:00:00.677 ********* 2025-05-14 02:26:03.624706 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:03.624717 | orchestrator | 2025-05-14 02:26:03.624728 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-05-14 02:26:03.624739 | orchestrator | Wednesday 14 May 2025 02:24:59 +0000 (0:00:01.216) 0:00:01.894 ********* 2025-05-14 02:26:03.624749 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-05-14 02:26:03.624760 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-05-14 02:26:03.624817 | orchestrator | 2025-05-14 02:26:03.624829 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-05-14 02:26:03.624840 | orchestrator | Wednesday 14 May 2025 02:25:00 +0000 (0:00:00.894) 0:00:02.789 ********* 2025-05-14 02:26:03.624850 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.624861 | orchestrator | 2025-05-14 02:26:03.624872 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-05-14 02:26:03.624882 | orchestrator | Wednesday 14 May 2025 02:25:03 +0000 (0:00:02.863) 0:00:05.652 ********* 2025-05-14 02:26:03.624893 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.624904 | orchestrator | 2025-05-14 02:26:03.624914 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-05-14 02:26:03.624925 | orchestrator | Wednesday 14 May 2025 02:25:04 +0000 (0:00:01.471) 0:00:07.124 ********* 2025-05-14 02:26:03.624942 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
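The "FAILED - RETRYING: ... (10 retries left)" lines are Ansible's retries/until mechanism at work: the task that brings the homer docker-compose stack up is retried while images are still being pulled, so a transient failure does not abort the play. A hedged sketch of such a "Manage ... service" task is shown below; the community.docker.docker_compose_v2 module and its parameters are an assumption here, not necessarily what osism.services.homer actually uses.

    # Hedged sketch of a "Manage <service> service" task with retries, as
    # suggested by the "FAILED - RETRYING ... (10 retries left)" output.
    # The docker_compose_v2 module and its parameters are an assumption.
    - name: Manage homer service
      community.docker.docker_compose_v2:
        project_src: /opt/homer
        state: present
      register: homer_result
      retries: 10
      delay: 10
      until: homer_result is not failed
      notify: Restart homer service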
2025-05-14 02:26:03.624953 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:03.624964 | orchestrator | 2025-05-14 02:26:03.624975 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-05-14 02:26:03.624988 | orchestrator | Wednesday 14 May 2025 02:25:29 +0000 (0:00:24.698) 0:00:31.822 ********* 2025-05-14 02:26:03.625001 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.625013 | orchestrator | 2025-05-14 02:26:03.625026 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:26:03.625169 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:03.625186 | orchestrator | 2025-05-14 02:26:03.625200 | orchestrator | Wednesday 14 May 2025 02:25:31 +0000 (0:00:01.823) 0:00:33.645 ********* 2025-05-14 02:26:03.625211 | orchestrator | =============================================================================== 2025-05-14 02:26:03.625222 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.70s 2025-05-14 02:26:03.625233 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.86s 2025-05-14 02:26:03.625244 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.82s 2025-05-14 02:26:03.625255 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.47s 2025-05-14 02:26:03.625266 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.22s 2025-05-14 02:26:03.625276 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.89s 2025-05-14 02:26:03.625287 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.30s 2025-05-14 02:26:03.625298 | orchestrator | 2025-05-14 02:26:03.625308 | orchestrator | 2025-05-14 02:26:03.625319 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-14 02:26:03.625330 | orchestrator | 2025-05-14 02:26:03.625340 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-05-14 02:26:03.625351 | orchestrator | Wednesday 14 May 2025 02:24:58 +0000 (0:00:00.489) 0:00:00.489 ********* 2025-05-14 02:26:03.625362 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-05-14 02:26:03.625373 | orchestrator | 2025-05-14 02:26:03.625384 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-05-14 02:26:03.625395 | orchestrator | Wednesday 14 May 2025 02:24:59 +0000 (0:00:00.801) 0:00:01.291 ********* 2025-05-14 02:26:03.625405 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-05-14 02:26:03.625416 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-05-14 02:26:03.625426 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-05-14 02:26:03.625437 | orchestrator | 2025-05-14 02:26:03.625448 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-14 02:26:03.625467 | orchestrator | Wednesday 14 May 2025 02:25:00 +0000 (0:00:01.399) 0:00:02.690 ********* 2025-05-14 02:26:03.625478 | orchestrator | changed: [testbed-manager] 
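The "Create required directories" tasks above simply loop the file module over a list of paths; a pre-existing path reports "ok" and a newly created one reports "changed", which is why /opt/openstackclient comes back ok while its data directory is changed. A sketch of that loop, using the paths from the log (owner and mode are assumptions):

    # Sketch of the "Create required directories" loop; only the paths are
    # taken from the log above, owner and mode are assumptions.
    - name: Create required directories
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        owner: dragon   # assumption: the deployment user seen earlier in this log
        mode: "0755"
      loop:
        - /opt/configuration/environments/openstack
        - /opt/openstackclient/data
        - /opt/openstackclient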
2025-05-14 02:26:03.625488 | orchestrator | 2025-05-14 02:26:03.625499 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-14 02:26:03.625510 | orchestrator | Wednesday 14 May 2025 02:25:02 +0000 (0:00:01.897) 0:00:04.588 ********* 2025-05-14 02:26:03.625521 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-05-14 02:26:03.625532 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:03.625542 | orchestrator | 2025-05-14 02:26:03.625567 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-14 02:26:03.625579 | orchestrator | Wednesday 14 May 2025 02:25:47 +0000 (0:00:45.107) 0:00:49.695 ********* 2025-05-14 02:26:03.625590 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.625601 | orchestrator | 2025-05-14 02:26:03.625612 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-14 02:26:03.625623 | orchestrator | Wednesday 14 May 2025 02:25:49 +0000 (0:00:01.297) 0:00:50.993 ********* 2025-05-14 02:26:03.625633 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:03.625644 | orchestrator | 2025-05-14 02:26:03.625655 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-14 02:26:03.625753 | orchestrator | Wednesday 14 May 2025 02:25:50 +0000 (0:00:01.599) 0:00:52.593 ********* 2025-05-14 02:26:03.625783 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.625794 | orchestrator | 2025-05-14 02:26:03.625805 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-14 02:26:03.625816 | orchestrator | Wednesday 14 May 2025 02:25:53 +0000 (0:00:02.367) 0:00:54.960 ********* 2025-05-14 02:26:03.625826 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.625837 | orchestrator | 2025-05-14 02:26:03.625848 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-14 02:26:03.625858 | orchestrator | Wednesday 14 May 2025 02:25:54 +0000 (0:00:01.044) 0:00:56.005 ********* 2025-05-14 02:26:03.625869 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.625879 | orchestrator | 2025-05-14 02:26:03.625890 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-05-14 02:26:03.625901 | orchestrator | Wednesday 14 May 2025 02:25:54 +0000 (0:00:00.745) 0:00:56.751 ********* 2025-05-14 02:26:03.625911 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:03.625922 | orchestrator | 2025-05-14 02:26:03.625933 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:26:03.625943 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:03.625954 | orchestrator | 2025-05-14 02:26:03.625970 | orchestrator | Wednesday 14 May 2025 02:25:55 +0000 (0:00:00.495) 0:00:57.247 ********* 2025-05-14 02:26:03.625981 | orchestrator | =============================================================================== 2025-05-14 02:26:03.625992 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 45.11s 2025-05-14 02:26:03.626002 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.37s 2025-05-14 02:26:03.626088 | orchestrator | osism.services.openstackclient : Copy 
docker-compose.yml file ----------- 1.90s 2025-05-14 02:26:03.626102 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.60s 2025-05-14 02:26:03.626113 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.40s 2025-05-14 02:26:03.626124 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.30s 2025-05-14 02:26:03.626135 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.04s 2025-05-14 02:26:03.626145 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.80s 2025-05-14 02:26:03.626156 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.75s 2025-05-14 02:26:03.626174 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.50s 2025-05-14 02:26:03.626185 | orchestrator | 2025-05-14 02:26:03.626234 | orchestrator | 2025-05-14 02:26:03.626246 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:26:03.626257 | orchestrator | 2025-05-14 02:26:03.626267 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:26:03.626278 | orchestrator | Wednesday 14 May 2025 02:24:58 +0000 (0:00:00.320) 0:00:00.320 ********* 2025-05-14 02:26:03.626288 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-14 02:26:03.626299 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-14 02:26:03.626310 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-14 02:26:03.626320 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-14 02:26:03.626331 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-14 02:26:03.626342 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-05-14 02:26:03.626352 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-14 02:26:03.626363 | orchestrator | 2025-05-14 02:26:03.626373 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-14 02:26:03.626384 | orchestrator | 2025-05-14 02:26:03.626395 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-14 02:26:03.626406 | orchestrator | Wednesday 14 May 2025 02:24:59 +0000 (0:00:01.687) 0:00:02.008 ********* 2025-05-14 02:26:03.626430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:26:03.626444 | orchestrator | 2025-05-14 02:26:03.626455 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-14 02:26:03.626466 | orchestrator | Wednesday 14 May 2025 02:25:01 +0000 (0:00:01.893) 0:00:03.901 ********* 2025-05-14 02:26:03.626476 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:26:03.626487 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:03.626498 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:26:03.626508 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:26:03.626519 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:26:03.626530 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:26:03.626540 | 
orchestrator | ok: [testbed-node-5] 2025-05-14 02:26:03.626551 | orchestrator | 2025-05-14 02:26:03.626562 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-14 02:26:03.626581 | orchestrator | Wednesday 14 May 2025 02:25:04 +0000 (0:00:02.435) 0:00:06.337 ********* 2025-05-14 02:26:03.626593 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:03.626604 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:26:03.626614 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:26:03.626625 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:26:03.626635 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:26:03.626646 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:26:03.626657 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:26:03.626683 | orchestrator | 2025-05-14 02:26:03.626694 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-14 02:26:03.626705 | orchestrator | Wednesday 14 May 2025 02:25:07 +0000 (0:00:03.194) 0:00:09.532 ********* 2025-05-14 02:26:03.626715 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.626726 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:03.626736 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:03.626747 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:03.626757 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:03.626792 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:03.626803 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:03.626814 | orchestrator | 2025-05-14 02:26:03.626825 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-14 02:26:03.626836 | orchestrator | Wednesday 14 May 2025 02:25:09 +0000 (0:00:02.327) 0:00:11.859 ********* 2025-05-14 02:26:03.626855 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.626866 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:03.626877 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:03.626887 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:03.626898 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:03.626915 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:03.626932 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:03.626951 | orchestrator | 2025-05-14 02:26:03.626968 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-05-14 02:26:03.626985 | orchestrator | Wednesday 14 May 2025 02:25:19 +0000 (0:00:09.330) 0:00:21.189 ********* 2025-05-14 02:26:03.627003 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:03.627019 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:03.627036 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:03.627053 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:03.627069 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:03.627101 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:03.627120 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.627138 | orchestrator | 2025-05-14 02:26:03.627155 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-14 02:26:03.627174 | orchestrator | Wednesday 14 May 2025 02:25:36 +0000 (0:00:17.554) 0:00:38.744 ********* 2025-05-14 02:26:03.627192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:26:03.627213 | orchestrator | 2025-05-14 02:26:03.627232 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-14 02:26:03.627251 | orchestrator | Wednesday 14 May 2025 02:25:38 +0000 (0:00:01.810) 0:00:40.554 ********* 2025-05-14 02:26:03.627265 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-14 02:26:03.627276 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-14 02:26:03.627287 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-14 02:26:03.627297 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-14 02:26:03.627308 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-14 02:26:03.627319 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-14 02:26:03.627329 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-14 02:26:03.627340 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-14 02:26:03.627350 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-14 02:26:03.627360 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-05-14 02:26:03.627371 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-14 02:26:03.627381 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-14 02:26:03.627392 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-14 02:26:03.627403 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-14 02:26:03.627413 | orchestrator | 2025-05-14 02:26:03.627424 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-05-14 02:26:03.627435 | orchestrator | Wednesday 14 May 2025 02:25:44 +0000 (0:00:06.262) 0:00:46.817 ********* 2025-05-14 02:26:03.627446 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:03.627456 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:26:03.627467 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:26:03.627477 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:26:03.627488 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:26:03.627498 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:26:03.627509 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:26:03.627520 | orchestrator | 2025-05-14 02:26:03.627530 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-05-14 02:26:03.627550 | orchestrator | Wednesday 14 May 2025 02:25:46 +0000 (0:00:01.639) 0:00:48.457 ********* 2025-05-14 02:26:03.627562 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:03.627572 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:03.627583 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.627593 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:03.627604 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:03.627614 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:03.627625 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:03.627635 | orchestrator | 2025-05-14 02:26:03.627646 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-05-14 02:26:03.627656 | orchestrator | Wednesday 14 May 2025 02:25:49 +0000 (0:00:03.102) 0:00:51.559 ********* 2025-05-14 02:26:03.627667 | 
orchestrator | ok: [testbed-manager] 2025-05-14 02:26:03.627677 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:26:03.627688 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:26:03.627698 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:26:03.627718 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:26:03.627730 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:26:03.627740 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:26:03.627751 | orchestrator | 2025-05-14 02:26:03.627762 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-05-14 02:26:03.627856 | orchestrator | Wednesday 14 May 2025 02:25:51 +0000 (0:00:02.254) 0:00:53.814 ********* 2025-05-14 02:26:03.627868 | orchestrator | ok: [testbed-manager] 2025-05-14 02:26:03.627878 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:26:03.627889 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:26:03.627899 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:26:03.627915 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:26:03.627935 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:26:03.627954 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:26:03.627973 | orchestrator | 2025-05-14 02:26:03.627993 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-05-14 02:26:03.628007 | orchestrator | Wednesday 14 May 2025 02:25:54 +0000 (0:00:02.509) 0:00:56.323 ********* 2025-05-14 02:26:03.628017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-05-14 02:26:03.628030 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:26:03.628042 | orchestrator | 2025-05-14 02:26:03.628053 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-14 02:26:03.628063 | orchestrator | Wednesday 14 May 2025 02:25:56 +0000 (0:00:02.370) 0:00:58.694 ********* 2025-05-14 02:26:03.628074 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.628085 | orchestrator | 2025-05-14 02:26:03.628095 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-14 02:26:03.628106 | orchestrator | Wednesday 14 May 2025 02:25:58 +0000 (0:00:02.311) 0:01:01.006 ********* 2025-05-14 02:26:03.628116 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:26:03.628127 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:26:03.628138 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:26:03.628148 | orchestrator | changed: [testbed-manager] 2025-05-14 02:26:03.628159 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:26:03.628169 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:26:03.628180 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:26:03.628190 | orchestrator | 2025-05-14 02:26:03.628201 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:26:03.628212 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:03.628223 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:03.628242 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-05-14 02:26:03.628253 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:03.628264 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:03.628274 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:03.628285 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:26:03.628296 | orchestrator | 2025-05-14 02:26:03.628306 | orchestrator | Wednesday 14 May 2025 02:26:02 +0000 (0:00:03.602) 0:01:04.608 ********* 2025-05-14 02:26:03.628318 | orchestrator | =============================================================================== 2025-05-14 02:26:03.628329 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 17.55s 2025-05-14 02:26:03.628339 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.33s 2025-05-14 02:26:03.628350 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.26s 2025-05-14 02:26:03.628361 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.60s 2025-05-14 02:26:03.628371 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.20s 2025-05-14 02:26:03.628382 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 3.10s 2025-05-14 02:26:03.628392 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.51s 2025-05-14 02:26:03.628403 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.44s 2025-05-14 02:26:03.628414 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.37s 2025-05-14 02:26:03.628424 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.33s 2025-05-14 02:26:03.628435 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.31s 2025-05-14 02:26:03.628445 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.25s 2025-05-14 02:26:03.628456 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.89s 2025-05-14 02:26:03.628467 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.81s 2025-05-14 02:26:03.628485 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.69s 2025-05-14 02:26:03.628497 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.64s 2025-05-14 02:26:03.628508 | orchestrator | 2025-05-14 02:26:03 | INFO  | Task 9ac8692d-eeac-4186-8f61-0b11254f5e5d is in state SUCCESS 2025-05-14 02:26:03.628610 | orchestrator | 2025-05-14 02:26:03 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:03.628625 | orchestrator | 2025-05-14 02:26:03 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:03.631014 | orchestrator | 2025-05-14 02:26:03 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:03.631099 | orchestrator | 2025-05-14 02:26:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:06.693483 | orchestrator | 
2025-05-14 02:26:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:06.693578 | orchestrator | 2025-05-14 02:26:06 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:06.696736 | orchestrator | 2025-05-14 02:26:06 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:06.696829 | orchestrator | 2025-05-14 02:26:06 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:06.696844 | orchestrator | 2025-05-14 02:26:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:09.775458 | orchestrator | 2025-05-14 02:26:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:09.775750 | orchestrator | 2025-05-14 02:26:09 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:09.776837 | orchestrator | 2025-05-14 02:26:09 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:09.778914 | orchestrator | 2025-05-14 02:26:09 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:09.778948 | orchestrator | 2025-05-14 02:26:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:12.827575 | orchestrator | 2025-05-14 02:26:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:12.827959 | orchestrator | 2025-05-14 02:26:12 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:12.829929 | orchestrator | 2025-05-14 02:26:12 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:12.832058 | orchestrator | 2025-05-14 02:26:12 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:12.832098 | orchestrator | 2025-05-14 02:26:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:15.888850 | orchestrator | 2025-05-14 02:26:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:15.896593 | orchestrator | 2025-05-14 02:26:15 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:15.898633 | orchestrator | 2025-05-14 02:26:15 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:15.899538 | orchestrator | 2025-05-14 02:26:15 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:15.899576 | orchestrator | 2025-05-14 02:26:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:18.955506 | orchestrator | 2025-05-14 02:26:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:18.959639 | orchestrator | 2025-05-14 02:26:18 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:18.961132 | orchestrator | 2025-05-14 02:26:18 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:18.962534 | orchestrator | 2025-05-14 02:26:18 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:18.962759 | orchestrator | 2025-05-14 02:26:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:22.020197 | orchestrator | 2025-05-14 02:26:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:22.021989 | orchestrator | 2025-05-14 02:26:22 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:22.025726 | orchestrator | 
2025-05-14 02:26:22 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:22.028966 | orchestrator | 2025-05-14 02:26:22 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:22.029089 | orchestrator | 2025-05-14 02:26:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:25.103901 | orchestrator | 2025-05-14 02:26:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:25.103991 | orchestrator | 2025-05-14 02:26:25 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:25.109673 | orchestrator | 2025-05-14 02:26:25 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:25.116256 | orchestrator | 2025-05-14 02:26:25 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:25.116328 | orchestrator | 2025-05-14 02:26:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:28.218475 | orchestrator | 2025-05-14 02:26:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:28.218608 | orchestrator | 2025-05-14 02:26:28 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:28.219160 | orchestrator | 2025-05-14 02:26:28 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:28.220280 | orchestrator | 2025-05-14 02:26:28 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:28.224048 | orchestrator | 2025-05-14 02:26:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:31.281718 | orchestrator | 2025-05-14 02:26:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:31.283293 | orchestrator | 2025-05-14 02:26:31 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:31.285080 | orchestrator | 2025-05-14 02:26:31 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:31.286445 | orchestrator | 2025-05-14 02:26:31 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:31.286563 | orchestrator | 2025-05-14 02:26:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:34.332005 | orchestrator | 2025-05-14 02:26:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:34.332569 | orchestrator | 2025-05-14 02:26:34 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:34.333333 | orchestrator | 2025-05-14 02:26:34 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:34.337335 | orchestrator | 2025-05-14 02:26:34 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:34.337390 | orchestrator | 2025-05-14 02:26:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:37.379404 | orchestrator | 2025-05-14 02:26:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:37.379568 | orchestrator | 2025-05-14 02:26:37 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:37.381138 | orchestrator | 2025-05-14 02:26:37 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:37.382136 | orchestrator | 2025-05-14 02:26:37 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:37.382172 | orchestrator | 
2025-05-14 02:26:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:40.431544 | orchestrator | 2025-05-14 02:26:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:40.434184 | orchestrator | 2025-05-14 02:26:40 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:40.435748 | orchestrator | 2025-05-14 02:26:40 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:40.435775 | orchestrator | 2025-05-14 02:26:40 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:40.435824 | orchestrator | 2025-05-14 02:26:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:43.463891 | orchestrator | 2025-05-14 02:26:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:43.465916 | orchestrator | 2025-05-14 02:26:43 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:43.466895 | orchestrator | 2025-05-14 02:26:43 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:43.467415 | orchestrator | 2025-05-14 02:26:43 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:43.467495 | orchestrator | 2025-05-14 02:26:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:46.509975 | orchestrator | 2025-05-14 02:26:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:46.512840 | orchestrator | 2025-05-14 02:26:46 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:46.514088 | orchestrator | 2025-05-14 02:26:46 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:46.515567 | orchestrator | 2025-05-14 02:26:46 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:46.515940 | orchestrator | 2025-05-14 02:26:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:49.592762 | orchestrator | 2025-05-14 02:26:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:49.597340 | orchestrator | 2025-05-14 02:26:49 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state STARTED 2025-05-14 02:26:49.602351 | orchestrator | 2025-05-14 02:26:49 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:49.607575 | orchestrator | 2025-05-14 02:26:49 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:49.609821 | orchestrator | 2025-05-14 02:26:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:52.658095 | orchestrator | 2025-05-14 02:26:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:52.658194 | orchestrator | 2025-05-14 02:26:52 | INFO  | Task 7c051209-7d48-4378-86d2-ae0c41e4fa7c is in state SUCCESS 2025-05-14 02:26:52.658436 | orchestrator | 2025-05-14 02:26:52 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:52.660326 | orchestrator | 2025-05-14 02:26:52 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:52.660389 | orchestrator | 2025-05-14 02:26:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:55.700264 | orchestrator | 2025-05-14 02:26:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:55.702545 | orchestrator | 2025-05-14 02:26:55 | INFO 
 | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:55.703005 | orchestrator | 2025-05-14 02:26:55 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:55.703029 | orchestrator | 2025-05-14 02:26:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:26:58.730409 | orchestrator | 2025-05-14 02:26:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:26:58.731481 | orchestrator | 2025-05-14 02:26:58 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:26:58.732877 | orchestrator | 2025-05-14 02:26:58 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:26:58.732944 | orchestrator | 2025-05-14 02:26:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:01.772143 | orchestrator | 2025-05-14 02:27:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:01.772838 | orchestrator | 2025-05-14 02:27:01 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:01.775179 | orchestrator | 2025-05-14 02:27:01 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:27:01.775204 | orchestrator | 2025-05-14 02:27:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:04.810410 | orchestrator | 2025-05-14 02:27:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:04.812706 | orchestrator | 2025-05-14 02:27:04 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:04.812748 | orchestrator | 2025-05-14 02:27:04 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:27:04.812761 | orchestrator | 2025-05-14 02:27:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:07.853062 | orchestrator | 2025-05-14 02:27:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:07.854663 | orchestrator | 2025-05-14 02:27:07 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:07.854707 | orchestrator | 2025-05-14 02:27:07 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:27:07.854720 | orchestrator | 2025-05-14 02:27:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:10.901102 | orchestrator | 2025-05-14 02:27:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:10.905283 | orchestrator | 2025-05-14 02:27:10 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:10.909355 | orchestrator | 2025-05-14 02:27:10 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:27:10.909412 | orchestrator | 2025-05-14 02:27:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:13.984422 | orchestrator | 2025-05-14 02:27:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:13.984517 | orchestrator | 2025-05-14 02:27:13 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:13.984529 | orchestrator | 2025-05-14 02:27:13 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:27:13.986096 | orchestrator | 2025-05-14 02:27:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:17.042677 | orchestrator | 2025-05-14 02:27:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in 
state STARTED 2025-05-14 02:27:17.043070 | orchestrator | 2025-05-14 02:27:17 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:17.044093 | orchestrator | 2025-05-14 02:27:17 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state STARTED 2025-05-14 02:27:17.044136 | orchestrator | 2025-05-14 02:27:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:20.090661 | orchestrator | 2025-05-14 02:27:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:20.091398 | orchestrator | 2025-05-14 02:27:20 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:20.095706 | orchestrator | 2025-05-14 02:27:20 | INFO  | Task 21eaadb7-f0a6-4bc1-84d0-8876c222f366 is in state SUCCESS 2025-05-14 02:27:20.097130 | orchestrator | 2025-05-14 02:27:20.097191 | orchestrator | 2025-05-14 02:27:20.097211 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-05-14 02:27:20.097231 | orchestrator | 2025-05-14 02:27:20.097247 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-05-14 02:27:20.097294 | orchestrator | Wednesday 14 May 2025 02:25:14 +0000 (0:00:00.228) 0:00:00.228 ********* 2025-05-14 02:27:20.097312 | orchestrator | ok: [testbed-manager] 2025-05-14 02:27:20.097330 | orchestrator | 2025-05-14 02:27:20.097347 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-05-14 02:27:20.097531 | orchestrator | Wednesday 14 May 2025 02:25:15 +0000 (0:00:00.989) 0:00:01.217 ********* 2025-05-14 02:27:20.097553 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-05-14 02:27:20.097572 | orchestrator | 2025-05-14 02:27:20.097590 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-05-14 02:27:20.097608 | orchestrator | Wednesday 14 May 2025 02:25:15 +0000 (0:00:00.771) 0:00:01.988 ********* 2025-05-14 02:27:20.097626 | orchestrator | changed: [testbed-manager] 2025-05-14 02:27:20.097645 | orchestrator | 2025-05-14 02:27:20.097663 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-05-14 02:27:20.097682 | orchestrator | Wednesday 14 May 2025 02:25:17 +0000 (0:00:01.513) 0:00:03.502 ********* 2025-05-14 02:27:20.097700 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
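[Editor's note] The osism.services.phpmyadmin role above only reports that it created the traefik external network and copied a docker-compose.yml into /opt/phpmyadmin; the file's contents are not part of this log. The following is a minimal, hypothetical sketch of what such a compose file typically looks like for phpMyAdmin behind traefik -- the image tag, environment variable, hostname rule, and router labels are assumptions for illustration, not values taken from this job:

    # /opt/phpmyadmin/docker-compose.yml -- illustrative sketch only, not the file deployed here
    services:
      phpmyadmin:
        image: phpmyadmin/phpmyadmin:latest        # assumed image; the role may pin a specific tag
        restart: unless-stopped
        environment:
          PMA_ARBITRARY: "1"                       # assumed: allow connecting to an arbitrary database host
        networks:
          - traefik
        labels:
          - "traefik.enable=true"                  # assumed traefik v2-style router labels
          - "traefik.http.routers.phpmyadmin.rule=Host(`phpmyadmin.example.test`)"

    networks:
      traefik:
        external: true                             # corresponds to the 'Create traefik external network' task above

The retries and the roughly 84-second runtime reported for the "Manage phpmyadmin service" task around this point are consistent with an initial image pull and container start-up, although the log itself does not state the cause.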
2025-05-14 02:27:20.097718 | orchestrator | ok: [testbed-manager] 2025-05-14 02:27:20.097736 | orchestrator | 2025-05-14 02:27:20.097754 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-05-14 02:27:20.097773 | orchestrator | Wednesday 14 May 2025 02:26:41 +0000 (0:01:23.990) 0:01:27.492 ********* 2025-05-14 02:27:20.097791 | orchestrator | changed: [testbed-manager] 2025-05-14 02:27:20.097842 | orchestrator | 2025-05-14 02:27:20.097860 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:27:20.097878 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:27:20.097898 | orchestrator | 2025-05-14 02:27:20.097915 | orchestrator | Wednesday 14 May 2025 02:26:49 +0000 (0:00:08.289) 0:01:35.782 ********* 2025-05-14 02:27:20.097933 | orchestrator | =============================================================================== 2025-05-14 02:27:20.097951 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 83.99s 2025-05-14 02:27:20.097968 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 8.29s 2025-05-14 02:27:20.097985 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.51s 2025-05-14 02:27:20.098001 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.99s 2025-05-14 02:27:20.098072 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.77s 2025-05-14 02:27:20.098097 | orchestrator | 2025-05-14 02:27:20.098115 | orchestrator | 2025-05-14 02:27:20.098283 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-05-14 02:27:20.098305 | orchestrator | 2025-05-14 02:27:20.098322 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-14 02:27:20.098341 | orchestrator | Wednesday 14 May 2025 02:24:52 +0000 (0:00:00.409) 0:00:00.409 ********* 2025-05-14 02:27:20.098359 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:27:20.098380 | orchestrator | 2025-05-14 02:27:20.098398 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-05-14 02:27:20.098416 | orchestrator | Wednesday 14 May 2025 02:24:54 +0000 (0:00:01.931) 0:00:02.341 ********* 2025-05-14 02:27:20.098434 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:27:20.098452 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:27:20.098469 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:27:20.098486 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:27:20.098504 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:27:20.098537 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:27:20.098554 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:27:20.098571 | orchestrator | changed: [testbed-node-3] => 
(item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:27:20.098589 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:27:20.098605 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:27:20.098622 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:27:20.098639 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-14 02:27:20.098655 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:27:20.098672 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:27:20.098690 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:27:20.098707 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:27:20.098735 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:27:20.098772 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-14 02:27:20.098791 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:27:20.098917 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:27:20.098938 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-14 02:27:20.098956 | orchestrator | 2025-05-14 02:27:20.098974 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-14 02:27:20.098992 | orchestrator | Wednesday 14 May 2025 02:24:58 +0000 (0:00:04.566) 0:00:06.907 ********* 2025-05-14 02:27:20.099010 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:27:20.099030 | orchestrator | 2025-05-14 02:27:20.099047 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-14 02:27:20.099065 | orchestrator | Wednesday 14 May 2025 02:25:00 +0000 (0:00:01.799) 0:00:08.706 ********* 2025-05-14 02:27:20.099089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.099114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.099134 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.099170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.099189 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.099223 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.099258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099297 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.099345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099437 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.099580 | orchestrator | 2025-05-14 02:27:20.099599 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-14 02:27:20.099624 | orchestrator | Wednesday 14 May 2025 02:25:05 +0000 (0:00:04.840) 0:00:13.547 ********* 2025-05-14 02:27:20.099655 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.099675 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.099696 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.099723 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:27:20.099742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.099761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.099779 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.099795 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:27:20.099844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.099889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.099910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.099927 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:27:20.099944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.099974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.099992 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100009 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:27:20.100027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.100046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100089 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:27:20.100119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.100137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100166 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100184 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:27:20.100202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.100220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100254 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:27:20.100270 | orchestrator | 2025-05-14 02:27:20.100288 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-14 02:27:20.100305 | orchestrator | Wednesday 14 May 2025 02:25:07 +0000 (0:00:01.787) 0:00:15.335 ********* 2025-05-14 02:27:20.100323 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.100359 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100379 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100408 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:27:20.100426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.100445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100482 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:27:20.100500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.100520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100574 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:27:20.100593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.100621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.100678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100697 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:27:20.100715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100734 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:27:20.100752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.100786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100885 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:27:20.100902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-14 02:27:20.100920 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100938 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.100956 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:27:20.100974 | orchestrator | 2025-05-14 02:27:20.100991 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-05-14 02:27:20.101008 | orchestrator | Wednesday 14 May 2025 02:25:10 +0000 (0:00:03.321) 0:00:18.656 ********* 2025-05-14 02:27:20.101026 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:27:20.101043 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:27:20.101059 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:27:20.101075 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:27:20.101092 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:27:20.101108 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:27:20.101125 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:27:20.101141 | orchestrator | 2025-05-14 02:27:20.101159 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-14 02:27:20.101175 | orchestrator | Wednesday 14 May 2025 02:25:11 +0000 (0:00:00.881) 0:00:19.538 ********* 2025-05-14 02:27:20.101193 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:27:20.101211 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:27:20.101229 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:27:20.101248 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:27:20.101266 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:27:20.101295 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:27:20.101313 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:27:20.101331 | orchestrator | 2025-05-14 02:27:20.101348 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-05-14 02:27:20.101366 | orchestrator | Wednesday 14 May 2025 02:25:12 +0000 (0:00:00.896) 0:00:20.435 ********* 2025-05-14 02:27:20.101382 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:27:20.101399 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:20.101416 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:20.101434 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:20.101451 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:20.101468 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:20.101485 | orchestrator | changed: [testbed-manager] 2025-05-14 02:27:20.101502 | orchestrator | 2025-05-14 02:27:20.101520 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-05-14 02:27:20.101537 | orchestrator | Wednesday 14 May 2025 02:25:45 +0000 (0:00:33.343) 0:00:53.778 ********* 2025-05-14 02:27:20.101554 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:27:20.101584 | 
orchestrator | ok: [testbed-node-0] 2025-05-14 02:27:20.101602 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:27:20.101619 | orchestrator | ok: [testbed-manager] 2025-05-14 02:27:20.101636 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:27:20.101653 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:27:20.101670 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:27:20.101687 | orchestrator | 2025-05-14 02:27:20.101704 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-14 02:27:20.101721 | orchestrator | Wednesday 14 May 2025 02:25:48 +0000 (0:00:02.886) 0:00:56.665 ********* 2025-05-14 02:27:20.101739 | orchestrator | ok: [testbed-manager] 2025-05-14 02:27:20.101755 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:27:20.101770 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:27:20.101784 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:27:20.101829 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:27:20.101847 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:27:20.101862 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:27:20.101878 | orchestrator | 2025-05-14 02:27:20.101895 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-05-14 02:27:20.101912 | orchestrator | Wednesday 14 May 2025 02:25:49 +0000 (0:00:01.363) 0:00:58.029 ********* 2025-05-14 02:27:20.101929 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:27:20.101946 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:27:20.101961 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:27:20.101978 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:27:20.101994 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:27:20.102010 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:27:20.102080 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:27:20.102099 | orchestrator | 2025-05-14 02:27:20.102117 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-14 02:27:20.102136 | orchestrator | Wednesday 14 May 2025 02:25:51 +0000 (0:00:01.088) 0:00:59.117 ********* 2025-05-14 02:27:20.102153 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:27:20.102171 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:27:20.102188 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:27:20.102204 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:27:20.102221 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:27:20.102238 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:27:20.102256 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:27:20.102273 | orchestrator | 2025-05-14 02:27:20.102290 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-14 02:27:20.102307 | orchestrator | Wednesday 14 May 2025 02:25:51 +0000 (0:00:00.895) 0:01:00.013 ********* 2025-05-14 02:27:20.102326 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.102364 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.102383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.102412 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.102470 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.102489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-14 02:27:20.102509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.102537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102556 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.102641 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102706 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102763 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102830 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.102867 | orchestrator | 2025-05-14 02:27:20.102885 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-14 02:27:20.102903 | orchestrator | Wednesday 14 May 2025 02:25:57 +0000 (0:00:05.989) 0:01:06.002 ********* 2025-05-14 02:27:20.102921 | orchestrator | [WARNING]: Skipped 2025-05-14 02:27:20.102938 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-14 02:27:20.102956 | orchestrator | to this access issue: 2025-05-14 02:27:20.102974 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-14 02:27:20.102991 | orchestrator | directory 2025-05-14 02:27:20.103008 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:27:20.103034 | orchestrator | 2025-05-14 02:27:20.103051 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-14 02:27:20.103069 | orchestrator | Wednesday 14 May 2025 02:25:58 +0000 (0:00:00.773) 0:01:06.775 ********* 2025-05-14 02:27:20.103085 | orchestrator | [WARNING]: Skipped 2025-05-14 02:27:20.103100 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-14 02:27:20.103116 | orchestrator | to this access issue: 2025-05-14 02:27:20.103132 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-14 02:27:20.103148 | orchestrator | directory 2025-05-14 02:27:20.103164 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:27:20.103181 | orchestrator | 2025-05-14 02:27:20.103197 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-14 02:27:20.103213 | orchestrator | Wednesday 14 May 2025 02:25:59 +0000 (0:00:00.996) 0:01:07.772 ********* 2025-05-14 02:27:20.103230 | orchestrator | [WARNING]: Skipped 2025-05-14 02:27:20.103248 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-14 02:27:20.103266 | orchestrator | to this access issue: 2025-05-14 02:27:20.103283 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-14 02:27:20.103301 | orchestrator | directory 2025-05-14 02:27:20.103318 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:27:20.103336 | orchestrator | 2025-05-14 02:27:20.103354 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-14 02:27:20.103369 | orchestrator | Wednesday 14 May 2025 02:26:00 +0000 (0:00:00.502) 0:01:08.274 ********* 2025-05-14 02:27:20.103385 | orchestrator | [WARNING]: Skipped 2025-05-14 02:27:20.103401 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-14 02:27:20.103417 | orchestrator | to this access 
issue: 2025-05-14 02:27:20.103434 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-14 02:27:20.103452 | orchestrator | directory 2025-05-14 02:27:20.103470 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:27:20.103488 | orchestrator | 2025-05-14 02:27:20.103506 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-05-14 02:27:20.103523 | orchestrator | Wednesday 14 May 2025 02:26:00 +0000 (0:00:00.750) 0:01:09.024 ********* 2025-05-14 02:27:20.103541 | orchestrator | changed: [testbed-manager] 2025-05-14 02:27:20.103559 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:20.103577 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:20.103595 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:20.103613 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:20.103629 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:20.103646 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:20.103664 | orchestrator | 2025-05-14 02:27:20.103682 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-14 02:27:20.103700 | orchestrator | Wednesday 14 May 2025 02:26:06 +0000 (0:00:05.287) 0:01:14.312 ********* 2025-05-14 02:27:20.103718 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:27:20.103736 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:27:20.103753 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:27:20.103769 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:27:20.103786 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:27:20.103873 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:27:20.103897 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-14 02:27:20.103934 | orchestrator | 2025-05-14 02:27:20.103974 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-14 02:27:20.103993 | orchestrator | Wednesday 14 May 2025 02:26:10 +0000 (0:00:04.613) 0:01:18.926 ********* 2025-05-14 02:27:20.104010 | orchestrator | changed: [testbed-manager] 2025-05-14 02:27:20.104027 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:20.104054 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:20.104071 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:20.104088 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:20.104120 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:20.104141 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:20.104159 | orchestrator | 2025-05-14 02:27:20.104178 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-14 02:27:20.104195 | orchestrator | Wednesday 14 May 2025 02:26:14 +0000 (0:00:03.379) 0:01:22.306 ********* 2025-05-14 02:27:20.104215 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.104236 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.104255 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.104274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.104293 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.104312 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.104365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.104388 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.104406 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.104424 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.104442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.104461 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.104481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.104510 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.105659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.105717 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.105739 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.105758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:27:20.105777 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.105796 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.105861 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.105881 | orchestrator | 2025-05-14 02:27:20.105900 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-14 02:27:20.105919 | orchestrator | Wednesday 14 May 2025 02:26:17 +0000 (0:00:02.814) 0:01:25.120 ********* 2025-05-14 02:27:20.105937 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:27:20.105955 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:27:20.105970 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:27:20.105987 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:27:20.106003 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:27:20.106079 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:27:20.106109 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-14 02:27:20.106127 | orchestrator | 2025-05-14 02:27:20.106144 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-14 02:27:20.106186 | orchestrator | Wednesday 14 May 2025 02:26:21 +0000 (0:00:04.074) 0:01:29.194 ********* 2025-05-14 02:27:20.106206 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:27:20.106223 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:27:20.106241 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:27:20.106260 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:27:20.106278 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:27:20.106298 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:27:20.106320 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-14 02:27:20.106341 | orchestrator | 2025-05-14 02:27:20.106361 | orchestrator | TASK [common : Check common 
containers] **************************************** 2025-05-14 02:27:20.106380 | orchestrator | Wednesday 14 May 2025 02:26:24 +0000 (0:00:03.714) 0:01:32.909 ********* 2025-05-14 02:27:20.106401 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.106422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.106455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.106477 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.106531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106550 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106586 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.106615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106668 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.106697 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-14 02:27:20.106744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106837 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106855 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:27:20.106891 | orchestrator | 2025-05-14 02:27:20.106908 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-14 02:27:20.106932 | orchestrator | Wednesday 14 May 2025 02:26:29 +0000 (0:00:04.705) 0:01:37.615 ********* 2025-05-14 02:27:20.106949 | orchestrator | changed: [testbed-manager] 2025-05-14 02:27:20.106976 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:20.106993 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:20.107009 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:20.107025 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:20.107041 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:20.107056 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:20.107072 | orchestrator | 2025-05-14 02:27:20.107089 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-14 02:27:20.107105 | orchestrator | Wednesday 14 May 2025 02:26:31 +0000 (0:00:01.785) 0:01:39.400 ********* 2025-05-14 02:27:20.107122 | orchestrator | changed: [testbed-manager] 2025-05-14 02:27:20.107138 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:20.107155 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:20.107171 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:20.107188 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:20.107204 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:20.107222 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:20.107239 | orchestrator | 2025-05-14 02:27:20.107257 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:27:20.107274 | orchestrator | Wednesday 14 May 2025 02:26:32 +0000 (0:00:01.494) 0:01:40.894 ********* 2025-05-14 02:27:20.107306 | orchestrator | 2025-05-14 02:27:20.107324 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:27:20.107342 | orchestrator | Wednesday 14 May 2025 02:26:32 +0000 (0:00:00.061) 0:01:40.956 ********* 2025-05-14 
02:27:20.107359 | orchestrator | 2025-05-14 02:27:20.107374 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:27:20.107390 | orchestrator | Wednesday 14 May 2025 02:26:32 +0000 (0:00:00.053) 0:01:41.010 ********* 2025-05-14 02:27:20.107406 | orchestrator | 2025-05-14 02:27:20.107422 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:27:20.107439 | orchestrator | Wednesday 14 May 2025 02:26:33 +0000 (0:00:00.057) 0:01:41.067 ********* 2025-05-14 02:27:20.107456 | orchestrator | 2025-05-14 02:27:20.107473 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:27:20.107491 | orchestrator | Wednesday 14 May 2025 02:26:33 +0000 (0:00:00.265) 0:01:41.333 ********* 2025-05-14 02:27:20.107509 | orchestrator | 2025-05-14 02:27:20.107526 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:27:20.107543 | orchestrator | Wednesday 14 May 2025 02:26:33 +0000 (0:00:00.062) 0:01:41.395 ********* 2025-05-14 02:27:20.107561 | orchestrator | 2025-05-14 02:27:20.107579 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-14 02:27:20.107597 | orchestrator | Wednesday 14 May 2025 02:26:33 +0000 (0:00:00.055) 0:01:41.451 ********* 2025-05-14 02:27:20.107615 | orchestrator | 2025-05-14 02:27:20.107633 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-14 02:27:20.107651 | orchestrator | Wednesday 14 May 2025 02:26:33 +0000 (0:00:00.073) 0:01:41.524 ********* 2025-05-14 02:27:20.107670 | orchestrator | changed: [testbed-manager] 2025-05-14 02:27:20.107687 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:20.107704 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:20.107720 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:20.107737 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:20.107754 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:20.107771 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:20.107789 | orchestrator | 2025-05-14 02:27:20.107895 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-14 02:27:20.107915 | orchestrator | Wednesday 14 May 2025 02:26:42 +0000 (0:00:08.697) 0:01:50.222 ********* 2025-05-14 02:27:20.107933 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:20.107949 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:20.107966 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:20.107983 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:20.108000 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:20.108018 | orchestrator | changed: [testbed-manager] 2025-05-14 02:27:20.108036 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:20.108054 | orchestrator | 2025-05-14 02:27:20.108072 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-14 02:27:20.108091 | orchestrator | Wednesday 14 May 2025 02:27:08 +0000 (0:00:26.813) 0:02:17.035 ********* 2025-05-14 02:27:20.108109 | orchestrator | ok: [testbed-manager] 2025-05-14 02:27:20.108126 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:27:20.108144 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:27:20.108161 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:27:20.108178 | 
orchestrator | ok: [testbed-node-4] 2025-05-14 02:27:20.108194 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:27:20.108210 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:27:20.108227 | orchestrator | 2025-05-14 02:27:20.108243 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-14 02:27:20.108262 | orchestrator | Wednesday 14 May 2025 02:27:11 +0000 (0:00:02.337) 0:02:19.373 ********* 2025-05-14 02:27:20.108279 | orchestrator | changed: [testbed-manager] 2025-05-14 02:27:20.108296 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:20.108313 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:27:20.108346 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:20.108365 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:27:20.108383 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:27:20.108400 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:20.108413 | orchestrator | 2025-05-14 02:27:20.108428 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:27:20.108443 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:27:20.108458 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:27:20.108481 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:27:20.108510 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:27:20.108525 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:27:20.108539 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:27:20.108553 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:27:20.108568 | orchestrator | 2025-05-14 02:27:20.108582 | orchestrator | 2025-05-14 02:27:20.108596 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:27:20.108611 | orchestrator | Wednesday 14 May 2025 02:27:19 +0000 (0:00:07.833) 0:02:27.206 ********* 2025-05-14 02:27:20.108625 | orchestrator | =============================================================================== 2025-05-14 02:27:20.108639 | orchestrator | common : Ensure fluentd image is present for label check --------------- 33.34s 2025-05-14 02:27:20.108653 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 26.81s 2025-05-14 02:27:20.108667 | orchestrator | common : Restart fluentd container -------------------------------------- 8.70s 2025-05-14 02:27:20.108680 | orchestrator | common : Restart cron container ----------------------------------------- 7.83s 2025-05-14 02:27:20.108694 | orchestrator | common : Copying over config.json files for services -------------------- 5.99s 2025-05-14 02:27:20.108708 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 5.29s 2025-05-14 02:27:20.108721 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.84s 2025-05-14 02:27:20.108735 | orchestrator | common : Check common containers ---------------------------------------- 4.71s 2025-05-14 02:27:20.108749 | orchestrator | common : 
Copying over cron logrotate config file ------------------------ 4.61s 2025-05-14 02:27:20.108763 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.57s 2025-05-14 02:27:20.108776 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.07s 2025-05-14 02:27:20.108790 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.71s 2025-05-14 02:27:20.108828 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.38s 2025-05-14 02:27:20.108843 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.32s 2025-05-14 02:27:20.108857 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.89s 2025-05-14 02:27:20.108871 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.81s 2025-05-14 02:27:20.108886 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.34s 2025-05-14 02:27:20.108900 | orchestrator | common : include_tasks -------------------------------------------------- 1.93s 2025-05-14 02:27:20.108925 | orchestrator | common : include_tasks -------------------------------------------------- 1.80s 2025-05-14 02:27:20.108939 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.79s 2025-05-14 02:27:20.108954 | orchestrator | 2025-05-14 02:27:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:23.126282 | orchestrator | 2025-05-14 02:27:23 | INFO  | Task f50c0b4c-0f14-4001-805d-be14801fb06e is in state STARTED 2025-05-14 02:27:23.127193 | orchestrator | 2025-05-14 02:27:23 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state STARTED 2025-05-14 02:27:23.129026 | orchestrator | 2025-05-14 02:27:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:23.129596 | orchestrator | 2025-05-14 02:27:23 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED 2025-05-14 02:27:23.130713 | orchestrator | 2025-05-14 02:27:23 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:23.131347 | orchestrator | 2025-05-14 02:27:23 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:27:23.131381 | orchestrator | 2025-05-14 02:27:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:26.174246 | orchestrator | 2025-05-14 02:27:26 | INFO  | Task f50c0b4c-0f14-4001-805d-be14801fb06e is in state STARTED 2025-05-14 02:27:26.174515 | orchestrator | 2025-05-14 02:27:26 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state STARTED 2025-05-14 02:27:26.174834 | orchestrator | 2025-05-14 02:27:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:26.175564 | orchestrator | 2025-05-14 02:27:26 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED 2025-05-14 02:27:26.175799 | orchestrator | 2025-05-14 02:27:26 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:26.176477 | orchestrator | 2025-05-14 02:27:26 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:27:26.176902 | orchestrator | 2025-05-14 02:27:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:29.215765 | orchestrator | 2025-05-14 02:27:29 | INFO  | Task f50c0b4c-0f14-4001-805d-be14801fb06e is in state STARTED 
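The status lines above and below come from the deployment watcher polling each queued task ID until it leaves the STARTED state, sleeping between rounds ("Wait 1 second(s) until the next check") and dropping a task once it reports SUCCESS. A minimal sketch of that polling pattern, assuming a hypothetical client object with a get_task_state() call (illustrative only, not the actual OSISM implementation):

    import time

    def wait_for_tasks(client, task_ids, interval=1):
        # Poll every pending task; drop it from the set once it reaches a final state.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):          # sorted() copies, so discard() below is safe
                state = client.get_task_state(task_id)   # hypothetical API call
                print(f"INFO  | Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"INFO  | Wait {interval} second(s) until the next check")
                time.sleep(interval)

Note that in the log the checks land roughly three seconds apart even with a one-second wait, which is consistent with each round also spending time fetching the task states.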
2025-05-14 02:27:29.218903 | orchestrator | 2025-05-14 02:27:29 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state STARTED 2025-05-14 02:27:29.219362 | orchestrator | 2025-05-14 02:27:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:29.221137 | orchestrator | 2025-05-14 02:27:29 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED 2025-05-14 02:27:29.221953 | orchestrator | 2025-05-14 02:27:29 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:29.223496 | orchestrator | 2025-05-14 02:27:29 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:27:29.223602 | orchestrator | 2025-05-14 02:27:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:32.263202 | orchestrator | 2025-05-14 02:27:32 | INFO  | Task f50c0b4c-0f14-4001-805d-be14801fb06e is in state STARTED 2025-05-14 02:27:32.263637 | orchestrator | 2025-05-14 02:27:32 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state STARTED 2025-05-14 02:27:32.264944 | orchestrator | 2025-05-14 02:27:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:32.267299 | orchestrator | 2025-05-14 02:27:32 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED 2025-05-14 02:27:32.270796 | orchestrator | 2025-05-14 02:27:32 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:32.271517 | orchestrator | 2025-05-14 02:27:32 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:27:32.271543 | orchestrator | 2025-05-14 02:27:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:35.317865 | orchestrator | 2025-05-14 02:27:35 | INFO  | Task f50c0b4c-0f14-4001-805d-be14801fb06e is in state STARTED 2025-05-14 02:27:35.317952 | orchestrator | 2025-05-14 02:27:35 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state STARTED 2025-05-14 02:27:35.321096 | orchestrator | 2025-05-14 02:27:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:35.323018 | orchestrator | 2025-05-14 02:27:35 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED 2025-05-14 02:27:35.325680 | orchestrator | 2025-05-14 02:27:35 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:35.328175 | orchestrator | 2025-05-14 02:27:35 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:27:35.328216 | orchestrator | 2025-05-14 02:27:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:38.374363 | orchestrator | 2025-05-14 02:27:38 | INFO  | Task f50c0b4c-0f14-4001-805d-be14801fb06e is in state STARTED 2025-05-14 02:27:38.381400 | orchestrator | 2025-05-14 02:27:38 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state STARTED 2025-05-14 02:27:38.381473 | orchestrator | 2025-05-14 02:27:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:38.381704 | orchestrator | 2025-05-14 02:27:38 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED 2025-05-14 02:27:38.384431 | orchestrator | 2025-05-14 02:27:38 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:38.384929 | orchestrator | 2025-05-14 02:27:38 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:27:38.384952 | orchestrator | 2025-05-14 02:27:38 | INFO  | Wait 1 second(s) 
until the next check 2025-05-14 02:27:41.456512 | orchestrator | 2025-05-14 02:27:41 | INFO  | Task f50c0b4c-0f14-4001-805d-be14801fb06e is in state SUCCESS 2025-05-14 02:27:41.456666 | orchestrator | 2025-05-14 02:27:41 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state STARTED 2025-05-14 02:27:41.457552 | orchestrator | 2025-05-14 02:27:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:41.458506 | orchestrator | 2025-05-14 02:27:41 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED 2025-05-14 02:27:41.458946 | orchestrator | 2025-05-14 02:27:41 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:41.459517 | orchestrator | 2025-05-14 02:27:41 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:27:41.459805 | orchestrator | 2025-05-14 02:27:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:44.490217 | orchestrator | 2025-05-14 02:27:44 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state STARTED 2025-05-14 02:27:44.490442 | orchestrator | 2025-05-14 02:27:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:44.491139 | orchestrator | 2025-05-14 02:27:44 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED 2025-05-14 02:27:44.491692 | orchestrator | 2025-05-14 02:27:44 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:44.492292 | orchestrator | 2025-05-14 02:27:44 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:27:44.493123 | orchestrator | 2025-05-14 02:27:44 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:27:44.493147 | orchestrator | 2025-05-14 02:27:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:47.537261 | orchestrator | 2025-05-14 02:27:47 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state STARTED 2025-05-14 02:27:47.539200 | orchestrator | 2025-05-14 02:27:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:47.540173 | orchestrator | 2025-05-14 02:27:47 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED 2025-05-14 02:27:47.545241 | orchestrator | 2025-05-14 02:27:47 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:47.545930 | orchestrator | 2025-05-14 02:27:47 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:27:47.547058 | orchestrator | 2025-05-14 02:27:47 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:27:47.547139 | orchestrator | 2025-05-14 02:27:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:50.603548 | orchestrator | 2025-05-14 02:27:50 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state STARTED 2025-05-14 02:27:50.603874 | orchestrator | 2025-05-14 02:27:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:50.604639 | orchestrator | 2025-05-14 02:27:50 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED 2025-05-14 02:27:50.605531 | orchestrator | 2025-05-14 02:27:50 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:50.606426 | orchestrator | 2025-05-14 02:27:50 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:27:50.607240 | orchestrator | 2025-05-14 02:27:50 | 
INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:27:50.607291 | orchestrator | 2025-05-14 02:27:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:53.647272 | orchestrator | 2025-05-14 02:27:53 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state STARTED 2025-05-14 02:27:53.647355 | orchestrator | 2025-05-14 02:27:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:53.648901 | orchestrator | 2025-05-14 02:27:53 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED 2025-05-14 02:27:53.648940 | orchestrator | 2025-05-14 02:27:53 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:53.648949 | orchestrator | 2025-05-14 02:27:53 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:27:53.649276 | orchestrator | 2025-05-14 02:27:53 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:27:53.649296 | orchestrator | 2025-05-14 02:27:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:56.684674 | orchestrator | 2025-05-14 02:27:56 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state STARTED 2025-05-14 02:27:56.685117 | orchestrator | 2025-05-14 02:27:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:27:56.686113 | orchestrator | 2025-05-14 02:27:56 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED 2025-05-14 02:27:56.686717 | orchestrator | 2025-05-14 02:27:56 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:27:56.687530 | orchestrator | 2025-05-14 02:27:56 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:27:56.688512 | orchestrator | 2025-05-14 02:27:56 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:27:56.688567 | orchestrator | 2025-05-14 02:27:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:27:59.730327 | orchestrator | 2025-05-14 02:27:59 | INFO  | Task f10398bf-f4a9-4aee-9d15-99f47b9b88dd is in state SUCCESS 2025-05-14 02:27:59.731404 | orchestrator | 2025-05-14 02:27:59.731449 | orchestrator | 2025-05-14 02:27:59.731457 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:27:59.731465 | orchestrator | 2025-05-14 02:27:59.731472 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:27:59.731479 | orchestrator | Wednesday 14 May 2025 02:27:23 +0000 (0:00:00.584) 0:00:00.584 ********* 2025-05-14 02:27:59.731486 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:27:59.731493 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:27:59.731499 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:27:59.731505 | orchestrator | 2025-05-14 02:27:59.731512 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:27:59.731518 | orchestrator | Wednesday 14 May 2025 02:27:23 +0000 (0:00:00.422) 0:00:01.007 ********* 2025-05-14 02:27:59.731524 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-14 02:27:59.731531 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-14 02:27:59.731537 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-14 02:27:59.731543 | orchestrator | 2025-05-14 02:27:59.731549 | orchestrator | PLAY [Apply role 
memcached] **************************************************** 2025-05-14 02:27:59.731556 | orchestrator | 2025-05-14 02:27:59.731562 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-14 02:27:59.731568 | orchestrator | Wednesday 14 May 2025 02:27:23 +0000 (0:00:00.340) 0:00:01.347 ********* 2025-05-14 02:27:59.731575 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:27:59.731582 | orchestrator | 2025-05-14 02:27:59.731588 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-14 02:27:59.731594 | orchestrator | Wednesday 14 May 2025 02:27:24 +0000 (0:00:00.785) 0:00:02.133 ********* 2025-05-14 02:27:59.731601 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-14 02:27:59.731608 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-14 02:27:59.731614 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-14 02:27:59.731621 | orchestrator | 2025-05-14 02:27:59.731627 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-14 02:27:59.731634 | orchestrator | Wednesday 14 May 2025 02:27:25 +0000 (0:00:00.852) 0:00:02.985 ********* 2025-05-14 02:27:59.731640 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-14 02:27:59.731646 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-14 02:27:59.731653 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-14 02:27:59.731659 | orchestrator | 2025-05-14 02:27:59.731666 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-14 02:27:59.731672 | orchestrator | Wednesday 14 May 2025 02:27:27 +0000 (0:00:01.868) 0:00:04.854 ********* 2025-05-14 02:27:59.731679 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:59.731686 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:59.731692 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:59.731698 | orchestrator | 2025-05-14 02:27:59.731704 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-14 02:27:59.731710 | orchestrator | Wednesday 14 May 2025 02:27:29 +0000 (0:00:02.407) 0:00:07.262 ********* 2025-05-14 02:27:59.731717 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:59.731724 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:27:59.731730 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:27:59.731737 | orchestrator | 2025-05-14 02:27:59.731743 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:27:59.731765 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:27:59.731773 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:27:59.731780 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:27:59.731786 | orchestrator | 2025-05-14 02:27:59.731792 | orchestrator | 2025-05-14 02:27:59.731798 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:27:59.731804 | orchestrator | Wednesday 14 May 2025 02:27:38 +0000 (0:00:08.956) 0:00:16.218 ********* 2025-05-14 02:27:59.731810 | orchestrator | 
=============================================================================== 2025-05-14 02:27:59.731855 | orchestrator | memcached : Restart memcached container --------------------------------- 8.96s 2025-05-14 02:27:59.731862 | orchestrator | memcached : Check memcached container ----------------------------------- 2.41s 2025-05-14 02:27:59.731869 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.87s 2025-05-14 02:27:59.731876 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.85s 2025-05-14 02:27:59.731882 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.79s 2025-05-14 02:27:59.731889 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2025-05-14 02:27:59.731896 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2025-05-14 02:27:59.731902 | orchestrator | 2025-05-14 02:27:59.731908 | orchestrator | 2025-05-14 02:27:59.731915 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:27:59.731922 | orchestrator | 2025-05-14 02:27:59.731928 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:27:59.731935 | orchestrator | Wednesday 14 May 2025 02:27:23 +0000 (0:00:00.372) 0:00:00.372 ********* 2025-05-14 02:27:59.731951 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:27:59.731958 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:27:59.731965 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:27:59.731972 | orchestrator | 2025-05-14 02:27:59.731978 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:27:59.731995 | orchestrator | Wednesday 14 May 2025 02:27:23 +0000 (0:00:00.486) 0:00:00.859 ********* 2025-05-14 02:27:59.732003 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-14 02:27:59.732010 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-14 02:27:59.732016 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-14 02:27:59.732023 | orchestrator | 2025-05-14 02:27:59.732030 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-14 02:27:59.732037 | orchestrator | 2025-05-14 02:27:59.732044 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-14 02:27:59.732051 | orchestrator | Wednesday 14 May 2025 02:27:23 +0000 (0:00:00.392) 0:00:01.251 ********* 2025-05-14 02:27:59.732057 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:27:59.732064 | orchestrator | 2025-05-14 02:27:59.732071 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-14 02:27:59.732077 | orchestrator | Wednesday 14 May 2025 02:27:25 +0000 (0:00:01.189) 0:00:02.441 ********* 2025-05-14 02:27:59.732086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2025-05-14 02:27:59.732147 | orchestrator | 2025-05-14 02:27:59.732152 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-14 02:27:59.732156 | orchestrator | Wednesday 14 May 2025 02:27:26 +0000 (0:00:01.611) 0:00:04.052 ********* 2025-05-14 02:27:59.732161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 
26379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732205 | orchestrator | 2025-05-14 02:27:59.732209 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-14 02:27:59.732213 | orchestrator | Wednesday 14 May 2025 02:27:29 +0000 (0:00:02.900) 0:00:06.953 ********* 2025-05-14 02:27:59.732218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732252 | orchestrator | 2025-05-14 02:27:59.732256 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-14 02:27:59.732261 | orchestrator | Wednesday 14 May 2025 02:27:33 +0000 (0:00:03.864) 0:00:10.817 ********* 2025-05-14 02:27:59.732270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-14 02:27:59.732325 | orchestrator | 2025-05-14 02:27:59.732331 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-14 02:27:59.732343 | orchestrator | Wednesday 14 May 2025 02:27:36 +0000 (0:00:02.867) 0:00:13.684 ********* 2025-05-14 02:27:59.732350 | orchestrator | 2025-05-14 02:27:59.732357 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-14 02:27:59.732363 | orchestrator | Wednesday 14 May 2025 02:27:36 +0000 (0:00:00.143) 0:00:13.827 ********* 2025-05-14 02:27:59.732371 | orchestrator | 2025-05-14 02:27:59.732377 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-14 02:27:59.732384 | orchestrator | Wednesday 14 May 2025 02:27:36 +0000 (0:00:00.126) 0:00:13.953 ********* 2025-05-14 02:27:59.732389 | orchestrator | 2025-05-14 02:27:59.732393 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-14 02:27:59.732396 | orchestrator | Wednesday 14 May 2025 02:27:36 +0000 (0:00:00.095) 0:00:14.048 ********* 2025-05-14 02:27:59.732400 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:27:59.732404 | orchestrator | changed: 
[testbed-node-1]
2025-05-14 02:27:59.732408 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:27:59.732413 | orchestrator |
2025-05-14 02:27:59.732420 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-05-14 02:27:59.732426 | orchestrator | Wednesday 14 May 2025 02:27:47 +0000 (0:00:10.284) 0:00:24.333 *********
2025-05-14 02:27:59.732432 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:27:59.732438 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:27:59.732445 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:27:59.732451 | orchestrator |
2025-05-14 02:27:59.732457 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 02:27:59.732464 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-14 02:27:59.732471 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-14 02:27:59.732477 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-14 02:27:59.732484 | orchestrator |
2025-05-14 02:27:59.732490 | orchestrator |
2025-05-14 02:27:59.732497 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:27:59.732503 | orchestrator | Wednesday 14 May 2025 02:27:56 +0000 (0:00:09.830) 0:00:34.164 *********
2025-05-14 02:27:59.732509 | orchestrator | ===============================================================================
2025-05-14 02:27:59.732516 | orchestrator | redis : Restart redis container ---------------------------------------- 10.28s
2025-05-14 02:27:59.732522 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.83s
2025-05-14 02:27:59.732528 | orchestrator | redis : Copying over redis config files --------------------------------- 3.86s
2025-05-14 02:27:59.732535 | orchestrator | redis : Copying over default config.json files -------------------------- 2.90s
2025-05-14 02:27:59.732541 | orchestrator | redis : Check redis containers ------------------------------------------ 2.87s
2025-05-14 02:27:59.732547 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.61s
2025-05-14 02:27:59.732553 | orchestrator | redis : include_tasks --------------------------------------------------- 1.19s
2025-05-14 02:27:59.732559 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s
2025-05-14 02:27:59.732565 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s
2025-05-14 02:27:59.732571 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.36s
2025-05-14 02:27:59.732751 | orchestrator | 2025-05-14 02:27:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:27:59.734510 | orchestrator | 2025-05-14 02:27:59 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED
2025-05-14 02:27:59.738564 | orchestrator | 2025-05-14 02:27:59 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED
2025-05-14 02:27:59.740400 | orchestrator | 2025-05-14 02:27:59 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED
2025-05-14 02:27:59.741471 | orchestrator | 2025-05-14 02:27:59 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED
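
The state messages above and below come from a client that repeatedly asks the manager for the state of each queued task and only moves on once a task reports SUCCESS. The following is only an illustrative sketch of such a wait loop; the get_task_state() helper and the fixed 1-second pause are assumptions for illustration, not the actual OSISM client code.

    # Illustrative polling loop matching the pattern visible in this log.
    # get_task_state() is a hypothetical helper, not a real OSISM API.
    import time
    from datetime import datetime

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll all task_ids until every one of them reports SUCCESS."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. "STARTED" or "SUCCESS"
                print(f"{datetime.now():%Y-%m-%d %H:%M:%S} | INFO  | "
                      f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"{datetime.now():%Y-%m-%d %H:%M:%S} | INFO  | "
                      f"Wait {interval} second(s) until the next check")
                time.sleep(interval)
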
2025-05-14 02:27:59.741706 | orchestrator | 2025-05-14 02:27:59 | INFO  | Wait 1 second(s) until the next check
[… 2025-05-14 02:28:02 – 02:28:39: Tasks d82f8ed9-5664-4bc4-a3e9-26e1a4e29521, 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca, 4f18501c-1ddd-4ef0-a495-7403d07898f9, 4c6c90b9-2e40-4944-9fd8-e4d557771eaa and 2d22fff8-f436-4c3b-a4f1-4de61b65985d repeatedly reported in state STARTED, with "Wait 1 second(s) until the next check" between polls every ~3 seconds …]
2025-05-14 02:28:42.426425 | orchestrator | 2025-05-14 02:28:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:28:42.426559 | orchestrator | 2025-05-14 02:28:42 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state STARTED
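
The per-item dictionaries printed by the redis play above, and by the openvswitch play that follows, all share the same shape: a container name, an image reference, bind-mounted volumes, and a Docker-style healthcheck. A minimal sketch of that structure, reduced to the fields that actually appear in this log (any further fields are simply omitted here, not claimed to be absent from the real role defaults):

    # Shape of the service definitions iterated over in the redis play;
    # values copied from the log output above.
    redis_services = {
        "redis": {
            "container_name": "redis",
            "group": "redis",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/redis:6.0.16.20241206",
            "volumes": [
                "/etc/kolla/redis/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "/etc/timezone:/etc/timezone:ro",
                "redis:/var/lib/redis/",
                "kolla_logs:/var/log/kolla/",
            ],
            "dimensions": {},
            "healthcheck": {
                "interval": "30",
                "retries": "3",
                "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
                "timeout": "30",
            },
        },
    }

    # Tasks such as "Ensuring config directories exist" loop over the
    # enabled entries, one iteration per (key, value) item shown in the log.
    for name, service in redis_services.items():
        if service["enabled"]:
            print(name, service["image"])
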
2025-05-14 02:28:42.428560 | orchestrator | 2025-05-14 02:28:42 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:28:42.428767 | orchestrator | 2025-05-14 02:28:42 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:28:42.429335 | orchestrator | 2025-05-14 02:28:42 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:28:42.429513 | orchestrator | 2025-05-14 02:28:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:28:45.466145 | orchestrator | 2025-05-14 02:28:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:28:45.474540 | orchestrator | 2025-05-14 02:28:45 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:28:45.474607 | orchestrator | 2025-05-14 02:28:45 | INFO  | Task 5b333b43-92ca-43c0-b0d7-ba1dd9a30cca is in state SUCCESS 2025-05-14 02:28:45.474620 | orchestrator | 2025-05-14 02:28:45 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:28:45.474632 | orchestrator | 2025-05-14 02:28:45 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:28:45.474643 | orchestrator | 2025-05-14 02:28:45 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:28:45.474654 | orchestrator | 2025-05-14 02:28:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:28:45.475970 | orchestrator | 2025-05-14 02:28:45.476014 | orchestrator | 2025-05-14 02:28:45.476026 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:28:45.476038 | orchestrator | 2025-05-14 02:28:45.476049 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:28:45.476060 | orchestrator | Wednesday 14 May 2025 02:27:23 +0000 (0:00:00.458) 0:00:00.458 ********* 2025-05-14 02:28:45.476071 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:28:45.476083 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:28:45.476093 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:28:45.476104 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:28:45.476114 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:28:45.476125 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:28:45.476135 | orchestrator | 2025-05-14 02:28:45.476146 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:28:45.476156 | orchestrator | Wednesday 14 May 2025 02:27:24 +0000 (0:00:00.946) 0:00:01.405 ********* 2025-05-14 02:28:45.476167 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 02:28:45.476178 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 02:28:45.476188 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 02:28:45.476199 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 02:28:45.476209 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 02:28:45.476219 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-14 02:28:45.476230 | orchestrator | 2025-05-14 02:28:45.476240 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-14 02:28:45.476251 | 
orchestrator | 2025-05-14 02:28:45.476261 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-14 02:28:45.476272 | orchestrator | Wednesday 14 May 2025 02:27:25 +0000 (0:00:01.008) 0:00:02.414 ********* 2025-05-14 02:28:45.476283 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:28:45.476295 | orchestrator | 2025-05-14 02:28:45.476306 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-14 02:28:45.476316 | orchestrator | Wednesday 14 May 2025 02:27:26 +0000 (0:00:01.818) 0:00:04.232 ********* 2025-05-14 02:28:45.476326 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-14 02:28:45.476338 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-14 02:28:45.476372 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-14 02:28:45.476383 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-14 02:28:45.476393 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-14 02:28:45.476403 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-14 02:28:45.476414 | orchestrator | 2025-05-14 02:28:45.476425 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-14 02:28:45.476450 | orchestrator | Wednesday 14 May 2025 02:27:28 +0000 (0:00:01.601) 0:00:05.834 ********* 2025-05-14 02:28:45.476461 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-14 02:28:45.476472 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-14 02:28:45.476482 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-14 02:28:45.476493 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-14 02:28:45.476503 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-14 02:28:45.476514 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-14 02:28:45.476524 | orchestrator | 2025-05-14 02:28:45.476535 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-14 02:28:45.476546 | orchestrator | Wednesday 14 May 2025 02:27:31 +0000 (0:00:02.718) 0:00:08.552 ********* 2025-05-14 02:28:45.476559 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-14 02:28:45.476580 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:28:45.476599 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-14 02:28:45.476618 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:28:45.476635 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-14 02:28:45.476652 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:28:45.476671 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-14 02:28:45.476690 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:28:45.476709 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-14 02:28:45.476728 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:28:45.476748 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-14 02:28:45.476768 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:28:45.476786 | orchestrator | 2025-05-14 02:28:45.476804 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on 
host] ***************** 2025-05-14 02:28:45.476817 | orchestrator | Wednesday 14 May 2025 02:27:34 +0000 (0:00:02.956) 0:00:11.509 ********* 2025-05-14 02:28:45.476869 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:28:45.476889 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:28:45.476909 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:28:45.476927 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:28:45.476944 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:28:45.476964 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:28:45.476981 | orchestrator | 2025-05-14 02:28:45.477000 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-14 02:28:45.477020 | orchestrator | Wednesday 14 May 2025 02:27:35 +0000 (0:00:01.207) 0:00:12.716 ********* 2025-05-14 02:28:45.477070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477190 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477352 | orchestrator | 2025-05-14 02:28:45.477370 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-14 02:28:45.477382 | orchestrator | Wednesday 14 May 2025 02:27:38 +0000 (0:00:03.002) 0:00:15.719 ********* 2025-05-14 02:28:45.477394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477520 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477555 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477566 | orchestrator | 2025-05-14 02:28:45.477577 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-05-14 02:28:45.477588 | orchestrator | Wednesday 14 May 2025 02:27:42 +0000 (0:00:04.239) 0:00:19.959 ********* 2025-05-14 02:28:45.477599 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:28:45.477610 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:28:45.477620 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:28:45.477631 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:28:45.477641 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:28:45.477652 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:28:45.477662 | orchestrator | 2025-05-14 02:28:45.477673 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-05-14 02:28:45.477683 | orchestrator | Wednesday 14 May 2025 02:27:45 +0000 (0:00:03.021) 0:00:22.980 ********* 2025-05-14 02:28:45.477694 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:28:45.477704 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:28:45.477715 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:28:45.477725 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:28:45.477736 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:28:45.477746 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:28:45.477757 | orchestrator | 2025-05-14 02:28:45.477768 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-14 02:28:45.477778 | orchestrator | Wednesday 14 May 2025 02:27:49 +0000 (0:00:04.047) 0:00:27.028 ********* 2025-05-14 02:28:45.477789 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:28:45.477799 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:28:45.477810 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:28:45.477841 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:28:45.477853 | orchestrator | 
skipping: [testbed-node-4] 2025-05-14 02:28:45.477863 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:28:45.477874 | orchestrator | 2025-05-14 02:28:45.477885 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-14 02:28:45.477896 | orchestrator | Wednesday 14 May 2025 02:27:51 +0000 (0:00:01.528) 0:00:28.557 ********* 2025-05-14 02:28:45.477908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.477998 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-14 02:28:45.478010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.478106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.478143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.478163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.478202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.478220 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-14 02:28:45.478232 | orchestrator | 2025-05-14 02:28:45.478243 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 02:28:45.478262 | orchestrator | Wednesday 14 May 2025 02:27:53 +0000 (0:00:02.711) 0:00:31.268 ********* 2025-05-14 02:28:45.478273 | orchestrator | 2025-05-14 02:28:45.478284 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 02:28:45.478295 | orchestrator | Wednesday 14 May 2025 02:27:54 +0000 (0:00:00.229) 0:00:31.498 ********* 2025-05-14 02:28:45.478305 | orchestrator | 2025-05-14 02:28:45.478316 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 02:28:45.478335 | orchestrator | Wednesday 14 May 2025 02:27:54 +0000 (0:00:00.340) 0:00:31.838 ********* 2025-05-14 02:28:45.478353 | orchestrator | 2025-05-14 02:28:45.478370 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 02:28:45.478387 | orchestrator | Wednesday 14 May 2025 02:27:54 +0000 (0:00:00.111) 0:00:31.950 ********* 2025-05-14 02:28:45.478404 | orchestrator | 2025-05-14 02:28:45.478422 | orchestrator | TASK [openvswitch : Flush 
Handlers] ******************************************** 2025-05-14 02:28:45.478439 | orchestrator | Wednesday 14 May 2025 02:27:54 +0000 (0:00:00.214) 0:00:32.165 ********* 2025-05-14 02:28:45.478481 | orchestrator | 2025-05-14 02:28:45.478501 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-14 02:28:45.478519 | orchestrator | Wednesday 14 May 2025 02:27:54 +0000 (0:00:00.099) 0:00:32.265 ********* 2025-05-14 02:28:45.478536 | orchestrator | 2025-05-14 02:28:45.478547 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-14 02:28:45.478558 | orchestrator | Wednesday 14 May 2025 02:27:55 +0000 (0:00:00.256) 0:00:32.522 ********* 2025-05-14 02:28:45.478568 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:28:45.478579 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:28:45.478589 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:28:45.478600 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:28:45.478610 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:28:45.478621 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:28:45.478631 | orchestrator | 2025-05-14 02:28:45.478642 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-14 02:28:45.478653 | orchestrator | Wednesday 14 May 2025 02:28:06 +0000 (0:00:10.938) 0:00:43.460 ********* 2025-05-14 02:28:45.478674 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:28:45.478685 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:28:45.478696 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:28:45.478706 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:28:45.478717 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:28:45.478727 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:28:45.478738 | orchestrator | 2025-05-14 02:28:45.478748 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-14 02:28:45.478759 | orchestrator | Wednesday 14 May 2025 02:28:08 +0000 (0:00:02.666) 0:00:46.126 ********* 2025-05-14 02:28:45.478770 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:28:45.478780 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:28:45.478791 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:28:45.478801 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:28:45.478812 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:28:45.478939 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:28:45.478958 | orchestrator | 2025-05-14 02:28:45.478969 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-14 02:28:45.478980 | orchestrator | Wednesday 14 May 2025 02:28:19 +0000 (0:00:10.305) 0:00:56.432 ********* 2025-05-14 02:28:45.478991 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-14 02:28:45.479003 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-14 02:28:45.479013 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-14 02:28:45.479024 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-14 02:28:45.479045 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 
'system-id', 'value': 'testbed-node-5'}) 2025-05-14 02:28:45.479057 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-14 02:28:45.479067 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-14 02:28:45.479078 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-14 02:28:45.479089 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-14 02:28:45.479099 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-14 02:28:45.479110 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-14 02:28:45.479120 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-14 02:28:45.479138 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 02:28:45.479149 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 02:28:45.479160 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 02:28:45.479170 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 02:28:45.479181 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 02:28:45.479192 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-14 02:28:45.479201 | orchestrator | 2025-05-14 02:28:45.479216 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-14 02:28:45.479233 | orchestrator | Wednesday 14 May 2025 02:28:27 +0000 (0:00:07.997) 0:01:04.429 ********* 2025-05-14 02:28:45.479248 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-14 02:28:45.479265 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:28:45.479282 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-14 02:28:45.479299 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:28:45.479315 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-14 02:28:45.479329 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:28:45.479338 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-14 02:28:45.479348 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-14 02:28:45.479357 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-14 02:28:45.479367 | orchestrator | 2025-05-14 02:28:45.479376 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-14 02:28:45.479386 | orchestrator | Wednesday 14 May 2025 02:28:29 +0000 (0:00:02.433) 0:01:06.863 ********* 2025-05-14 02:28:45.479395 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-14 02:28:45.479405 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:28:45.479415 | orchestrator | 
skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-14 02:28:45.479424 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:28:45.479434 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-14 02:28:45.479443 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:28:45.479452 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-14 02:28:45.479470 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-14 02:28:45.479480 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-14 02:28:45.479500 | orchestrator | 2025-05-14 02:28:45.479510 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-14 02:28:45.479520 | orchestrator | Wednesday 14 May 2025 02:28:33 +0000 (0:00:03.948) 0:01:10.812 ********* 2025-05-14 02:28:45.479529 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:28:45.479539 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:28:45.479549 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:28:45.479558 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:28:45.479567 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:28:45.479577 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:28:45.479586 | orchestrator | 2025-05-14 02:28:45.479596 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:28:45.479606 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:28:45.479618 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:28:45.479627 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:28:45.479637 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:28:45.479646 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:28:45.479656 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:28:45.479665 | orchestrator | 2025-05-14 02:28:45.479675 | orchestrator | 2025-05-14 02:28:45.479684 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:28:45.479694 | orchestrator | Wednesday 14 May 2025 02:28:42 +0000 (0:00:08.709) 0:01:19.522 ********* 2025-05-14 02:28:45.479703 | orchestrator | =============================================================================== 2025-05-14 02:28:45.479713 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.01s 2025-05-14 02:28:45.479723 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.94s 2025-05-14 02:28:45.479732 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.00s 2025-05-14 02:28:45.479742 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.24s 2025-05-14 02:28:45.479756 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 4.05s 2025-05-14 02:28:45.479766 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.95s 2025-05-14 02:28:45.479775 | orchestrator | openvswitch : Copying 
over start-ovs file for openvswitch-vswitchd ------ 3.02s 2025-05-14 02:28:45.479785 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.00s 2025-05-14 02:28:45.479794 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.96s 2025-05-14 02:28:45.479803 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.72s 2025-05-14 02:28:45.479813 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.71s 2025-05-14 02:28:45.479847 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.67s 2025-05-14 02:28:45.479857 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.43s 2025-05-14 02:28:45.479867 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.82s 2025-05-14 02:28:45.479876 | orchestrator | module-load : Load modules ---------------------------------------------- 1.60s 2025-05-14 02:28:45.479886 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.53s 2025-05-14 02:28:45.479895 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.25s 2025-05-14 02:28:45.479915 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.21s 2025-05-14 02:28:45.479925 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.01s 2025-05-14 02:28:45.479935 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.95s 2025-05-14 02:28:48.511392 | orchestrator | 2025-05-14 02:28:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:28:48.511523 | orchestrator | 2025-05-14 02:28:48 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:28:48.512539 | orchestrator | 2025-05-14 02:28:48 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:28:48.512954 | orchestrator | 2025-05-14 02:28:48 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:28:48.513661 | orchestrator | 2025-05-14 02:28:48 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:28:48.513685 | orchestrator | 2025-05-14 02:28:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:28:51.567141 | orchestrator | 2025-05-14 02:28:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:28:51.567520 | orchestrator | 2025-05-14 02:28:51 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:28:51.568137 | orchestrator | 2025-05-14 02:28:51 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:28:51.569676 | orchestrator | 2025-05-14 02:28:51 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:28:51.569707 | orchestrator | 2025-05-14 02:28:51 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:28:51.574208 | orchestrator | 2025-05-14 02:28:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:28:54.629898 | orchestrator | 2025-05-14 02:28:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:28:54.631036 | orchestrator | 2025-05-14 02:28:54 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:28:54.632398 | 
orchestrator | 2025-05-14 02:28:54 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:28:54.633378 | orchestrator | 2025-05-14 02:28:54 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:28:54.634206 | orchestrator | 2025-05-14 02:28:54 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:28:54.634357 | orchestrator | 2025-05-14 02:28:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:28:57.675133 | orchestrator | 2025-05-14 02:28:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:28:57.678342 | orchestrator | 2025-05-14 02:28:57 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:28:57.681531 | orchestrator | 2025-05-14 02:28:57 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:28:57.683700 | orchestrator | 2025-05-14 02:28:57 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:28:57.686157 | orchestrator | 2025-05-14 02:28:57 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:28:57.686183 | orchestrator | 2025-05-14 02:28:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:00.728130 | orchestrator | 2025-05-14 02:29:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:29:00.728275 | orchestrator | 2025-05-14 02:29:00 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:29:00.728472 | orchestrator | 2025-05-14 02:29:00 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:29:00.729473 | orchestrator | 2025-05-14 02:29:00 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:29:00.730159 | orchestrator | 2025-05-14 02:29:00 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:29:00.730188 | orchestrator | 2025-05-14 02:29:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:03.766972 | orchestrator | 2025-05-14 02:29:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:29:03.767547 | orchestrator | 2025-05-14 02:29:03 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:29:03.768544 | orchestrator | 2025-05-14 02:29:03 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:29:03.771937 | orchestrator | 2025-05-14 02:29:03 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:29:03.771982 | orchestrator | 2025-05-14 02:29:03 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:29:03.771999 | orchestrator | 2025-05-14 02:29:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:06.815589 | orchestrator | 2025-05-14 02:29:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:29:06.818083 | orchestrator | 2025-05-14 02:29:06 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:29:06.819510 | orchestrator | 2025-05-14 02:29:06 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:29:06.821522 | orchestrator | 2025-05-14 02:29:06 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:29:06.823443 | orchestrator | 2025-05-14 02:29:06 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 
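While the remaining deploy tasks are still being polled, the Open vSwitch containers started by the play above can already be spot-checked by hand. The play wires healthchecks into both containers (openvswitch_db runs 'ovsdb-client list-dbs', openvswitch_vswitchd runs 'ovs-appctl version', as visible in the task items). A minimal sketch of the same probes, assuming Docker is the container engine on the testbed nodes and using the container names shown above:

  # Run the same commands Kolla configured as container healthchecks
  $ docker exec openvswitch_db ovsdb-client list-dbs          # the Open_vSwitch database should be listed
  $ docker exec openvswitch_vswitchd ovs-appctl version       # an ovs-vswitchd 3.3.x version banner is expected
  # Or read back the health status Docker derives from those checks
  $ docker inspect --format '{{.State.Health.Status}}' openvswitch_db openvswitch_vswitchd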
2025-05-14 02:29:06.823483 | orchestrator | 2025-05-14 02:29:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:09.860515 | orchestrator | 2025-05-14 02:29:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:29:09.860652 | orchestrator | 2025-05-14 02:29:09 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:29:09.860678 | orchestrator | 2025-05-14 02:29:09 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:29:09.860696 | orchestrator | 2025-05-14 02:29:09 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:29:09.860984 | orchestrator | 2025-05-14 02:29:09 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:29:09.861011 | orchestrator | 2025-05-14 02:29:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:12.909154 | orchestrator | 2025-05-14 02:29:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:29:12.909300 | orchestrator | 2025-05-14 02:29:12 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:29:12.909317 | orchestrator | 2025-05-14 02:29:12 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:29:12.909329 | orchestrator | 2025-05-14 02:29:12 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:29:12.909430 | orchestrator | 2025-05-14 02:29:12 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:29:12.909445 | orchestrator | 2025-05-14 02:29:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:15.942292 | orchestrator | 2025-05-14 02:29:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:29:15.942416 | orchestrator | 2025-05-14 02:29:15 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:29:15.942441 | orchestrator | 2025-05-14 02:29:15 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:29:15.943072 | orchestrator | 2025-05-14 02:29:15 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:29:15.943501 | orchestrator | 2025-05-14 02:29:15 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:29:15.943538 | orchestrator | 2025-05-14 02:29:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:18.997434 | orchestrator | 2025-05-14 02:29:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:29:18.998708 | orchestrator | 2025-05-14 02:29:18 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:29:19.001174 | orchestrator | 2025-05-14 02:29:18 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:29:19.005560 | orchestrator | 2025-05-14 02:29:19 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:29:19.009291 | orchestrator | 2025-05-14 02:29:19 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:29:19.009472 | orchestrator | 2025-05-14 02:29:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:29:22.044805 | orchestrator | 2025-05-14 02:29:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:29:22.047200 | orchestrator | 2025-05-14 02:29:22 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 
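For orientation, the "Set system-id, hostname and hw-offload" task and the two "Ensuring OVS bridge/ports are properly setup" tasks above amount to a handful of ovs-vsctl operations on the network nodes (testbed-node-0/1/2; the compute nodes 3/4/5 skip the br-ex and vxlan0 items). A rough, illustrative equivalent, run through the openvswitch_vswitchd container so no ovs-vsctl binary is needed on the host; kolla-ansible drives these steps through its own modules, so exact flags and ordering may differ, and each node sets its own name (testbed-node-0 is used here as the example):

  $ docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-0
  $ docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . external_ids:hostname=testbed-node-0
  $ docker exec openvswitch_vswitchd ovs-vsctl remove Open_vSwitch . other_config hw-offload   # the 'state: absent' item above
  $ docker exec openvswitch_vswitchd ovs-vsctl --may-exist add-br br-ex
  $ docker exec openvswitch_vswitchd ovs-vsctl --may-exist add-port br-ex vxlan0
  $ docker exec openvswitch_vswitchd ovs-vsctl show                                            # verify br-ex and its vxlan0 port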
2025-05-14 02:29:22.048164 | orchestrator | 2025-05-14 02:29:22 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED [... the same five status checks (tasks d82f8ed9, aff3b234, 4f18501c, 4c6c90b9, 2d22fff8) repeat roughly every three seconds and all tasks remain in state STARTED through 02:30:01 ...] 2025-05-14 02:30:01.736983 | orchestrator | 2025-05-14 
02:30:01 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:01.737019 | orchestrator | 2025-05-14 02:30:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:04.780871 | orchestrator | 2025-05-14 02:30:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:04.781422 | orchestrator | 2025-05-14 02:30:04 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:04.782482 | orchestrator | 2025-05-14 02:30:04 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:04.785352 | orchestrator | 2025-05-14 02:30:04 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state STARTED 2025-05-14 02:30:04.786487 | orchestrator | 2025-05-14 02:30:04 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:04.786581 | orchestrator | 2025-05-14 02:30:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:07.828031 | orchestrator | 2025-05-14 02:30:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:07.828149 | orchestrator | 2025-05-14 02:30:07 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:07.829403 | orchestrator | 2025-05-14 02:30:07 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:07.831625 | orchestrator | 2025-05-14 02:30:07.831675 | orchestrator | 2025-05-14 02:30:07.831698 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-05-14 02:30:07.831720 | orchestrator | 2025-05-14 02:30:07.831768 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-14 02:30:07.831783 | orchestrator | Wednesday 14 May 2025 02:27:45 +0000 (0:00:00.325) 0:00:00.325 ********* 2025-05-14 02:30:07.831794 | orchestrator | ok: [localhost] => { 2025-05-14 02:30:07.831807 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-05-14 02:30:07.831818 | orchestrator | } 2025-05-14 02:30:07.831829 | orchestrator | 2025-05-14 02:30:07.831840 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-05-14 02:30:07.831851 | orchestrator | Wednesday 14 May 2025 02:27:45 +0000 (0:00:00.118) 0:00:00.443 ********* 2025-05-14 02:30:07.831864 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-05-14 02:30:07.831901 | orchestrator | ...ignoring 2025-05-14 02:30:07.831912 | orchestrator | 2025-05-14 02:30:07.831938 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-05-14 02:30:07.831949 | orchestrator | Wednesday 14 May 2025 02:27:48 +0000 (0:00:03.115) 0:00:03.559 ********* 2025-05-14 02:30:07.831960 | orchestrator | skipping: [localhost] 2025-05-14 02:30:07.831971 | orchestrator | 2025-05-14 02:30:07.831981 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-05-14 02:30:07.831992 | orchestrator | Wednesday 14 May 2025 02:27:48 +0000 (0:00:00.184) 0:00:03.744 ********* 2025-05-14 02:30:07.832003 | orchestrator | ok: [localhost] 2025-05-14 02:30:07.832014 | orchestrator | 2025-05-14 02:30:07.832024 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:30:07.832035 | orchestrator | 2025-05-14 02:30:07.832046 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:30:07.832057 | orchestrator | Wednesday 14 May 2025 02:27:49 +0000 (0:00:00.688) 0:00:04.433 ********* 2025-05-14 02:30:07.832067 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.832078 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.832089 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.832099 | orchestrator | 2025-05-14 02:30:07.832110 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:30:07.832121 | orchestrator | Wednesday 14 May 2025 02:27:50 +0000 (0:00:00.914) 0:00:05.347 ********* 2025-05-14 02:30:07.832131 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-05-14 02:30:07.832143 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-05-14 02:30:07.832153 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-05-14 02:30:07.832164 | orchestrator | 2025-05-14 02:30:07.832174 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-05-14 02:30:07.832185 | orchestrator | 2025-05-14 02:30:07.832196 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-14 02:30:07.832207 | orchestrator | Wednesday 14 May 2025 02:27:50 +0000 (0:00:00.662) 0:00:06.010 ********* 2025-05-14 02:30:07.832218 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:30:07.832229 | orchestrator | 2025-05-14 02:30:07.832240 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-14 02:30:07.832251 | orchestrator | Wednesday 14 May 2025 02:27:52 +0000 (0:00:01.360) 0:00:07.370 ********* 2025-05-14 02:30:07.832262 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.832272 | orchestrator | 2025-05-14 02:30:07.832283 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-05-14 02:30:07.832293 | orchestrator | Wednesday 14 May 2025 02:27:53 +0000 (0:00:01.070) 0:00:08.440 ********* 2025-05-14 02:30:07.832304 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.832315 | orchestrator | 2025-05-14 02:30:07.832326 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-05-14 02:30:07.832337 | orchestrator | Wednesday 14 May 2025 02:27:53 +0000 (0:00:00.327) 0:00:08.768 ********* 2025-05-14 02:30:07.832347 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.832358 | orchestrator | 2025-05-14 02:30:07.832368 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-05-14 02:30:07.832379 | orchestrator | Wednesday 14 May 2025 02:27:54 +0000 (0:00:00.574) 0:00:09.343 ********* 2025-05-14 02:30:07.832390 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.832400 | orchestrator | 2025-05-14 02:30:07.832431 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-05-14 02:30:07.832443 | orchestrator | Wednesday 14 May 2025 02:27:54 +0000 (0:00:00.420) 0:00:09.764 ********* 2025-05-14 02:30:07.832454 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.832464 | orchestrator | 2025-05-14 02:30:07.832475 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-14 02:30:07.832485 | orchestrator | Wednesday 14 May 2025 02:27:55 +0000 (0:00:00.396) 0:00:10.160 ********* 2025-05-14 02:30:07.832504 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:30:07.832515 | orchestrator | 2025-05-14 02:30:07.832525 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-14 02:30:07.832536 | orchestrator | Wednesday 14 May 2025 02:27:57 +0000 (0:00:01.897) 0:00:12.058 ********* 2025-05-14 02:30:07.832546 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.832557 | orchestrator | 2025-05-14 02:30:07.832567 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-05-14 02:30:07.832578 | orchestrator | Wednesday 14 May 2025 02:27:58 +0000 (0:00:01.133) 0:00:13.191 ********* 2025-05-14 02:30:07.832589 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.832600 | orchestrator | 2025-05-14 02:30:07.832610 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-14 02:30:07.832621 | orchestrator | Wednesday 14 May 2025 02:27:58 +0000 (0:00:00.366) 0:00:13.558 ********* 2025-05-14 02:30:07.832683 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.832704 | orchestrator | 2025-05-14 02:30:07.832738 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-14 02:30:07.832820 | orchestrator | Wednesday 14 May 2025 02:27:58 +0000 (0:00:00.397) 0:00:13.955 ********* 2025-05-14 02:30:07.832844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:30:07.832862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:30:07.832875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:30:07.832897 | orchestrator | 2025-05-14 02:30:07.832908 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-14 02:30:07.832919 | orchestrator | Wednesday 14 May 2025 02:27:59 +0000 (0:00:01.029) 0:00:14.984 ********* 2025-05-14 02:30:07.832941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:30:07.832959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:30:07.832972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:30:07.832991 | orchestrator | 2025-05-14 02:30:07.833002 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-14 02:30:07.833013 | orchestrator | Wednesday 14 May 2025 02:28:01 +0000 (0:00:01.724) 0:00:16.708 ********* 2025-05-14 02:30:07.833023 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-14 02:30:07.833034 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-14 02:30:07.833045 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-14 02:30:07.833056 | orchestrator | 2025-05-14 02:30:07.833067 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-14 02:30:07.833078 | orchestrator | Wednesday 14 May 2025 02:28:03 +0000 (0:00:01.522) 0:00:18.231 ********* 
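The series of "Copying over ..." tasks in this play renders the rabbitmq role's Jinja2 templates (rabbitmq-env.conf, rabbitmq.conf, erl_inetrc, advanced.config, definitions.json, enabled_plugins) into /etc/kolla/rabbitmq/ on each node; the container then mounts that directory read-only at /var/lib/kolla/config_files, as the 'volumes' list in the item dumps above shows. A minimal sketch of such a task follows — it is not the literal kolla-ansible code, and the destination path, file mode and handler wiring are assumptions inferred from the template paths logged here:

- name: Copying over RabbitMQ configuration files   # sketch only, not the role's actual task
  ansible.builtin.template:
    src: "{{ item }}"
    # strip the .j2 suffix so rabbitmq.conf.j2 lands as /etc/kolla/rabbitmq/rabbitmq.conf
    dest: "/etc/kolla/rabbitmq/{{ item | basename | regex_replace('\\.j2$', '') }}"
    mode: "0660"
  loop:
    - /ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2
    - /ansible/roles/rabbitmq/templates/rabbitmq.conf.j2
    - /ansible/roles/rabbitmq/templates/erl_inetrc.j2
    - /ansible/roles/rabbitmq/templates/advanced.config.j2
  notify:
    - Restart rabbitmq container

Each changed template notifies the "Restart rabbitmq container" handler, which is why the per-node "Restart rabbitmq services" plays appear further down in this log.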
2025-05-14 02:30:07.833088 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-14 02:30:07.833099 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-14 02:30:07.833109 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-14 02:30:07.833120 | orchestrator | 2025-05-14 02:30:07.833131 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-14 02:30:07.833142 | orchestrator | Wednesday 14 May 2025 02:28:05 +0000 (0:00:02.081) 0:00:20.313 ********* 2025-05-14 02:30:07.833152 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-14 02:30:07.833163 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-14 02:30:07.833173 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-14 02:30:07.833184 | orchestrator | 2025-05-14 02:30:07.833201 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-14 02:30:07.833212 | orchestrator | Wednesday 14 May 2025 02:28:07 +0000 (0:00:01.740) 0:00:22.053 ********* 2025-05-14 02:30:07.833223 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-14 02:30:07.833233 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-14 02:30:07.833244 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-14 02:30:07.833255 | orchestrator | 2025-05-14 02:30:07.833266 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-14 02:30:07.833276 | orchestrator | Wednesday 14 May 2025 02:28:10 +0000 (0:00:03.195) 0:00:25.249 ********* 2025-05-14 02:30:07.833287 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-14 02:30:07.833297 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-14 02:30:07.833308 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-14 02:30:07.833319 | orchestrator | 2025-05-14 02:30:07.833330 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-14 02:30:07.833341 | orchestrator | Wednesday 14 May 2025 02:28:12 +0000 (0:00:02.641) 0:00:27.891 ********* 2025-05-14 02:30:07.833351 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-14 02:30:07.833362 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-14 02:30:07.833373 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-14 02:30:07.833384 | orchestrator | 2025-05-14 02:30:07.833394 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-14 02:30:07.833405 | orchestrator | Wednesday 14 May 2025 02:28:14 +0000 (0:00:01.810) 0:00:29.702 ********* 2025-05-14 02:30:07.833422 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.833433 | orchestrator | skipping: [testbed-node-1] 
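Of the files just copied, definitions.json typically pre-seeds users, vhosts and permissions at first start, and enabled_plugins is what turns on plugins such as rabbitmq_management; its UI on port 15672 is both the backend that HAProxy fronts (see the 'haproxy: rabbitmq_management' entry in the container definition above) and the target of the ignored "Check RabbitMQ service" probe at the top of this play. A sketch of that kind of readiness probe, assuming the wait_for module is used — host, port and search string are taken from the earlier timeout message, while the timeout value is a guess:

- name: Check RabbitMQ service            # sketch, not the playbook's literal task
  ansible.builtin.wait_for:
    host: 192.168.16.9
    port: 15672
    search_regex: RabbitMQ Management
    timeout: 3
  ignore_errors: true                     # the log shows this failure being ...ignoring'd

Because RabbitMQ was not yet deployed when the probe ran, the timeout is expected on a first deploy and only steers the playbook toward the deploy (rather than upgrade) code path.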
2025-05-14 02:30:07.833444 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.833454 | orchestrator | 2025-05-14 02:30:07.833465 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-14 02:30:07.833476 | orchestrator | Wednesday 14 May 2025 02:28:15 +0000 (0:00:00.530) 0:00:30.233 ********* 2025-05-14 02:30:07.833487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:30:07.833499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:30:07.834352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:30:07.834411 | orchestrator | 2025-05-14 02:30:07.834426 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-14 02:30:07.834440 | orchestrator | Wednesday 14 May 2025 02:28:16 +0000 (0:00:01.449) 0:00:31.682 ********* 2025-05-14 02:30:07.834453 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.834480 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.834491 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.834501 | orchestrator | 2025-05-14 02:30:07.834512 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-14 02:30:07.834523 | orchestrator | Wednesday 14 May 2025 02:28:17 +0000 (0:00:01.017) 0:00:32.699 ********* 2025-05-14 02:30:07.834533 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.834544 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.834555 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.834565 | orchestrator | 2025-05-14 02:30:07.834576 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-14 02:30:07.834586 | orchestrator | Wednesday 14 May 2025 02:28:26 +0000 (0:00:08.546) 0:00:41.245 ********* 2025-05-14 02:30:07.834597 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.834607 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.834618 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.834628 | orchestrator | 2025-05-14 02:30:07.834639 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-14 02:30:07.834650 | orchestrator | 2025-05-14 02:30:07.834661 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-14 02:30:07.834671 | orchestrator | Wednesday 14 May 2025 02:28:26 +0000 (0:00:00.327) 0:00:41.573 ********* 2025-05-14 02:30:07.834682 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.834693 | orchestrator | 2025-05-14 02:30:07.834712 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-14 02:30:07.834723 | orchestrator | Wednesday 14 May 2025 02:28:27 +0000 (0:00:00.860) 0:00:42.433 ********* 2025-05-14 02:30:07.834733 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:30:07.834814 | orchestrator | 2025-05-14 02:30:07.834835 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-14 02:30:07.834853 | orchestrator | Wednesday 14 May 2025 02:28:27 +0000 (0:00:00.239) 0:00:42.673 ********* 2025-05-14 02:30:07.834868 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.834879 | orchestrator | 2025-05-14 02:30:07.834890 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-14 02:30:07.834901 | orchestrator | Wednesday 14 May 2025 02:28:29 +0000 (0:00:01.916) 0:00:44.589 ********* 2025-05-14 02:30:07.834911 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:30:07.834922 | orchestrator | 2025-05-14 02:30:07.834932 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-14 02:30:07.834943 | orchestrator | 2025-05-14 02:30:07.834953 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-14 
02:30:07.834964 | orchestrator | Wednesday 14 May 2025 02:29:26 +0000 (0:00:57.011) 0:01:41.601 ********* 2025-05-14 02:30:07.834975 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.834986 | orchestrator | 2025-05-14 02:30:07.834996 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-14 02:30:07.835007 | orchestrator | Wednesday 14 May 2025 02:29:27 +0000 (0:00:01.329) 0:01:42.930 ********* 2025-05-14 02:30:07.835018 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:30:07.835028 | orchestrator | 2025-05-14 02:30:07.835039 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-14 02:30:07.835050 | orchestrator | Wednesday 14 May 2025 02:29:28 +0000 (0:00:00.279) 0:01:43.210 ********* 2025-05-14 02:30:07.835060 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.835071 | orchestrator | 2025-05-14 02:30:07.835081 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-14 02:30:07.835092 | orchestrator | Wednesday 14 May 2025 02:29:35 +0000 (0:00:06.838) 0:01:50.048 ********* 2025-05-14 02:30:07.835103 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:30:07.835113 | orchestrator | 2025-05-14 02:30:07.835124 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-14 02:30:07.835134 | orchestrator | 2025-05-14 02:30:07.835145 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-14 02:30:07.835165 | orchestrator | Wednesday 14 May 2025 02:29:45 +0000 (0:00:10.430) 0:02:00.479 ********* 2025-05-14 02:30:07.835175 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.835186 | orchestrator | 2025-05-14 02:30:07.835196 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-14 02:30:07.835207 | orchestrator | Wednesday 14 May 2025 02:29:46 +0000 (0:00:00.663) 0:02:01.143 ********* 2025-05-14 02:30:07.835218 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:30:07.835228 | orchestrator | 2025-05-14 02:30:07.835239 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-14 02:30:07.835261 | orchestrator | Wednesday 14 May 2025 02:29:46 +0000 (0:00:00.368) 0:02:01.511 ********* 2025-05-14 02:30:07.835273 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.835283 | orchestrator | 2025-05-14 02:30:07.835294 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-14 02:30:07.835305 | orchestrator | Wednesday 14 May 2025 02:29:48 +0000 (0:00:02.114) 0:02:03.625 ********* 2025-05-14 02:30:07.835315 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:30:07.835325 | orchestrator | 2025-05-14 02:30:07.835335 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-05-14 02:30:07.835345 | orchestrator | 2025-05-14 02:30:07.835354 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-05-14 02:30:07.835364 | orchestrator | Wednesday 14 May 2025 02:30:03 +0000 (0:00:14.792) 0:02:18.418 ********* 2025-05-14 02:30:07.835374 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:30:07.835383 | orchestrator | 2025-05-14 02:30:07.835392 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] 
****************************** 2025-05-14 02:30:07.835402 | orchestrator | Wednesday 14 May 2025 02:30:04 +0000 (0:00:00.918) 0:02:19.336 ********* 2025-05-14 02:30:07.835412 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-14 02:30:07.835422 | orchestrator | enable_outward_rabbitmq_True 2025-05-14 02:30:07.835431 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-14 02:30:07.835441 | orchestrator | outward_rabbitmq_restart 2025-05-14 02:30:07.835450 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:30:07.835460 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:30:07.835470 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:30:07.835479 | orchestrator | 2025-05-14 02:30:07.835488 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-05-14 02:30:07.835498 | orchestrator | skipping: no hosts matched 2025-05-14 02:30:07.835560 | orchestrator | 2025-05-14 02:30:07.835571 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-05-14 02:30:07.835580 | orchestrator | skipping: no hosts matched 2025-05-14 02:30:07.835589 | orchestrator | 2025-05-14 02:30:07.835599 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-14 02:30:07.835608 | orchestrator | skipping: no hosts matched 2025-05-14 02:30:07.835618 | orchestrator | 2025-05-14 02:30:07.835627 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:30:07.835638 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-14 02:30:07.835648 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-14 02:30:07.835658 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:30:07.835673 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:30:07.835683 | orchestrator | 2025-05-14 02:30:07.835693 | orchestrator | 2025-05-14 02:30:07.835703 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:30:07.835721 | orchestrator | Wednesday 14 May 2025 02:30:07 +0000 (0:00:02.787) 0:02:22.124 ********* 2025-05-14 02:30:07.835731 | orchestrator | =============================================================================== 2025-05-14 02:30:07.835762 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 82.24s 2025-05-14 02:30:07.835774 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.87s 2025-05-14 02:30:07.835783 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.55s 2025-05-14 02:30:07.835793 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.20s 2025-05-14 02:30:07.835802 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.12s 2025-05-14 02:30:07.835812 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.85s 2025-05-14 02:30:07.835821 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.79s 2025-05-14 02:30:07.835831 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 
2.64s 2025-05-14 02:30:07.835840 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.08s 2025-05-14 02:30:07.835850 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.90s 2025-05-14 02:30:07.835859 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.81s 2025-05-14 02:30:07.835868 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.74s 2025-05-14 02:30:07.835878 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.72s 2025-05-14 02:30:07.835887 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.52s 2025-05-14 02:30:07.835897 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.45s 2025-05-14 02:30:07.835906 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.36s 2025-05-14 02:30:07.835916 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.13s 2025-05-14 02:30:07.835925 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.07s 2025-05-14 02:30:07.835934 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.03s 2025-05-14 02:30:07.835944 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.02s 2025-05-14 02:30:07.835961 | orchestrator | 2025-05-14 02:30:07 | INFO  | Task 4c6c90b9-2e40-4944-9fd8-e4d557771eaa is in state SUCCESS 2025-05-14 02:30:07.835971 | orchestrator | 2025-05-14 02:30:07 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:07.835981 | orchestrator | 2025-05-14 02:30:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:10.874249 | orchestrator | 2025-05-14 02:30:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:10.874367 | orchestrator | 2025-05-14 02:30:10 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:10.875039 | orchestrator | 2025-05-14 02:30:10 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:10.878576 | orchestrator | 2025-05-14 02:30:10 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:10.878620 | orchestrator | 2025-05-14 02:30:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:13.937439 | orchestrator | 2025-05-14 02:30:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:13.941310 | orchestrator | 2025-05-14 02:30:13 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:13.941958 | orchestrator | 2025-05-14 02:30:13 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:13.943077 | orchestrator | 2025-05-14 02:30:13 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:13.943153 | orchestrator | 2025-05-14 02:30:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:16.993971 | orchestrator | 2025-05-14 02:30:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:16.999131 | orchestrator | 2025-05-14 02:30:16 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:17.001113 | orchestrator | 2025-05-14 02:30:16 | INFO  | Task 
4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:17.004228 | orchestrator | 2025-05-14 02:30:17 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:17.004300 | orchestrator | 2025-05-14 02:30:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:20.053697 | orchestrator | 2025-05-14 02:30:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:20.056075 | orchestrator | 2025-05-14 02:30:20 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:20.058611 | orchestrator | 2025-05-14 02:30:20 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:20.060611 | orchestrator | 2025-05-14 02:30:20 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:20.060692 | orchestrator | 2025-05-14 02:30:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:23.099106 | orchestrator | 2025-05-14 02:30:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:23.099231 | orchestrator | 2025-05-14 02:30:23 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:23.099545 | orchestrator | 2025-05-14 02:30:23 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:23.101029 | orchestrator | 2025-05-14 02:30:23 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:23.101083 | orchestrator | 2025-05-14 02:30:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:26.155294 | orchestrator | 2025-05-14 02:30:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:26.158443 | orchestrator | 2025-05-14 02:30:26 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:26.160584 | orchestrator | 2025-05-14 02:30:26 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:26.162113 | orchestrator | 2025-05-14 02:30:26 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:26.162361 | orchestrator | 2025-05-14 02:30:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:29.218605 | orchestrator | 2025-05-14 02:30:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:29.218807 | orchestrator | 2025-05-14 02:30:29 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:29.218824 | orchestrator | 2025-05-14 02:30:29 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:29.219943 | orchestrator | 2025-05-14 02:30:29 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:29.219971 | orchestrator | 2025-05-14 02:30:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:32.282609 | orchestrator | 2025-05-14 02:30:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:32.282810 | orchestrator | 2025-05-14 02:30:32 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:32.283102 | orchestrator | 2025-05-14 02:30:32 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:32.286630 | orchestrator | 2025-05-14 02:30:32 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:32.286669 | orchestrator | 2025-05-14 02:30:32 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 02:30:35.317914 | orchestrator | 2025-05-14 02:30:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:35.318528 | orchestrator | 2025-05-14 02:30:35 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:35.319688 | orchestrator | 2025-05-14 02:30:35 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:35.320683 | orchestrator | 2025-05-14 02:30:35 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:35.320749 | orchestrator | 2025-05-14 02:30:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:38.366615 | orchestrator | 2025-05-14 02:30:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:38.371053 | orchestrator | 2025-05-14 02:30:38 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:38.373892 | orchestrator | 2025-05-14 02:30:38 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:38.375294 | orchestrator | 2025-05-14 02:30:38 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:38.375334 | orchestrator | 2025-05-14 02:30:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:41.413057 | orchestrator | 2025-05-14 02:30:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:41.413148 | orchestrator | 2025-05-14 02:30:41 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:41.413312 | orchestrator | 2025-05-14 02:30:41 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:41.416363 | orchestrator | 2025-05-14 02:30:41 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:41.416411 | orchestrator | 2025-05-14 02:30:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:44.459763 | orchestrator | 2025-05-14 02:30:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:44.460905 | orchestrator | 2025-05-14 02:30:44 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:44.463115 | orchestrator | 2025-05-14 02:30:44 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:44.465131 | orchestrator | 2025-05-14 02:30:44 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:44.465176 | orchestrator | 2025-05-14 02:30:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:47.511816 | orchestrator | 2025-05-14 02:30:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:47.511916 | orchestrator | 2025-05-14 02:30:47 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:47.513532 | orchestrator | 2025-05-14 02:30:47 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:47.515026 | orchestrator | 2025-05-14 02:30:47 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:47.515243 | orchestrator | 2025-05-14 02:30:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:50.566545 | orchestrator | 2025-05-14 02:30:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:50.566873 | orchestrator | 2025-05-14 02:30:50 | INFO  | Task 
aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:50.567359 | orchestrator | 2025-05-14 02:30:50 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:50.568369 | orchestrator | 2025-05-14 02:30:50 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:50.568418 | orchestrator | 2025-05-14 02:30:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:53.609688 | orchestrator | 2025-05-14 02:30:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:53.611651 | orchestrator | 2025-05-14 02:30:53 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:53.615047 | orchestrator | 2025-05-14 02:30:53 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:53.616964 | orchestrator | 2025-05-14 02:30:53 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:53.616995 | orchestrator | 2025-05-14 02:30:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:56.646847 | orchestrator | 2025-05-14 02:30:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:56.647035 | orchestrator | 2025-05-14 02:30:56 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:56.647937 | orchestrator | 2025-05-14 02:30:56 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:56.649154 | orchestrator | 2025-05-14 02:30:56 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:56.649177 | orchestrator | 2025-05-14 02:30:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:30:59.691101 | orchestrator | 2025-05-14 02:30:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:30:59.694897 | orchestrator | 2025-05-14 02:30:59 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:30:59.698391 | orchestrator | 2025-05-14 02:30:59 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:30:59.699550 | orchestrator | 2025-05-14 02:30:59 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:30:59.699683 | orchestrator | 2025-05-14 02:30:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:02.739397 | orchestrator | 2025-05-14 02:31:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:02.739694 | orchestrator | 2025-05-14 02:31:02 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:31:02.740626 | orchestrator | 2025-05-14 02:31:02 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:02.743089 | orchestrator | 2025-05-14 02:31:02 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:02.743143 | orchestrator | 2025-05-14 02:31:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:05.790785 | orchestrator | 2025-05-14 02:31:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:05.791187 | orchestrator | 2025-05-14 02:31:05 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state STARTED 2025-05-14 02:31:05.792258 | orchestrator | 2025-05-14 02:31:05 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:05.795635 | orchestrator | 2025-05-14 02:31:05 | INFO  | Task 
2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:05.795805 | orchestrator | 2025-05-14 02:31:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:08.829737 | orchestrator | 2025-05-14 02:31:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:08.830462 | orchestrator | 2025-05-14 02:31:08 | INFO  | Task aff3b234-d99b-45c5-983c-941c07deab78 is in state SUCCESS 2025-05-14 02:31:08.833135 | orchestrator | 2025-05-14 02:31:08.833192 | orchestrator | 2025-05-14 02:31:08.833212 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:31:08.833232 | orchestrator | 2025-05-14 02:31:08.833250 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:31:08.833269 | orchestrator | Wednesday 14 May 2025 02:28:45 +0000 (0:00:00.173) 0:00:00.173 ********* 2025-05-14 02:31:08.833288 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.833308 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.833327 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.833345 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:31:08.833363 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:31:08.833376 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:31:08.833387 | orchestrator | 2025-05-14 02:31:08.833398 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:31:08.833409 | orchestrator | Wednesday 14 May 2025 02:28:45 +0000 (0:00:00.615) 0:00:00.789 ********* 2025-05-14 02:31:08.833420 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-14 02:31:08.833432 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-14 02:31:08.833442 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-14 02:31:08.833453 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-14 02:31:08.833463 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-14 02:31:08.833474 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-14 02:31:08.833484 | orchestrator | 2025-05-14 02:31:08.833495 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-14 02:31:08.833520 | orchestrator | 2025-05-14 02:31:08.833531 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-14 02:31:08.833542 | orchestrator | Wednesday 14 May 2025 02:28:46 +0000 (0:00:00.971) 0:00:01.760 ********* 2025-05-14 02:31:08.833554 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:31:08.833566 | orchestrator | 2025-05-14 02:31:08.833577 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-14 02:31:08.833587 | orchestrator | Wednesday 14 May 2025 02:28:47 +0000 (0:00:01.124) 0:00:02.885 ********* 2025-05-14 02:31:08.833601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833682 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833733 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833748 | orchestrator | 2025-05-14 02:31:08.833759 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-14 02:31:08.833770 | orchestrator | Wednesday 14 May 2025 02:28:48 +0000 (0:00:01.005) 0:00:03.890 ********* 2025-05-14 02:31:08.833781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833827 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833862 | orchestrator | 2025-05-14 02:31:08.833873 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-14 02:31:08.833884 | orchestrator | Wednesday 14 May 2025 02:28:50 +0000 (0:00:01.992) 0:00:05.882 ********* 2025-05-14 02:31:08.833896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833966 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.833977 | orchestrator | 2025-05-14 02:31:08.833988 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-14 02:31:08.833999 | orchestrator | Wednesday 14 May 2025 02:28:52 +0000 (0:00:01.716) 0:00:07.598 ********* 2025-05-14 02:31:08.834010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.834126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.834144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.834155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.834166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.834186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.834198 | orchestrator | 2025-05-14 02:31:08.834209 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-14 02:31:08.834220 | orchestrator | Wednesday 14 May 2025 02:28:55 +0000 (0:00:02.489) 0:00:10.088 ********* 2025-05-14 02:31:08.834232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.834243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.834254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.834271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.834283 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.834298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.834310 | orchestrator | 2025-05-14 02:31:08.834328 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-14 02:31:08.834347 | orchestrator | Wednesday 14 May 2025 02:28:56 +0000 (0:00:01.445) 0:00:11.534 ********* 2025-05-14 02:31:08.834366 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:31:08.834384 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:31:08.834404 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:31:08.834423 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:31:08.834443 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:31:08.834463 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:31:08.834482 | orchestrator | 2025-05-14 02:31:08.834501 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-14 02:31:08.834512 | orchestrator | Wednesday 14 May 2025 02:28:59 +0000 (0:00:03.282) 0:00:14.816 ********* 2025-05-14 02:31:08.834523 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-14 02:31:08.834534 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-14 02:31:08.834545 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-14 02:31:08.834564 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-14 02:31:08.834575 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-14 02:31:08.834586 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-14 02:31:08.834597 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 02:31:08.834608 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 02:31:08.834619 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 02:31:08.834629 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 02:31:08.834640 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 02:31:08.834651 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-14 02:31:08.834663 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 02:31:08.834684 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 02:31:08.834695 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 02:31:08.834748 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 02:31:08.834763 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 02:31:08.834774 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-14 02:31:08.834785 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 02:31:08.834797 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 02:31:08.834808 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 02:31:08.834819 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 02:31:08.834830 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 02:31:08.834841 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-14 02:31:08.834852 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 02:31:08.834863 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 02:31:08.834874 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 02:31:08.834885 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 02:31:08.834895 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 02:31:08.834912 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-14 02:31:08.834923 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 02:31:08.834934 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 02:31:08.834945 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 02:31:08.834956 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 02:31:08.834967 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 02:31:08.834978 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-14 02:31:08.834988 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-14 02:31:08.834999 | 
orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-14 02:31:08.835010 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-14 02:31:08.835021 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-14 02:31:08.835039 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-14 02:31:08.835050 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-14 02:31:08.835068 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-14 02:31:08.835079 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-14 02:31:08.835090 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-14 02:31:08.835101 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-14 02:31:08.835112 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-14 02:31:08.835123 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-14 02:31:08.835134 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-14 02:31:08.835145 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-14 02:31:08.835156 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-14 02:31:08.835167 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-14 02:31:08.835177 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-14 02:31:08.835188 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-14 02:31:08.835199 | orchestrator | 2025-05-14 02:31:08.835210 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 02:31:08.835221 | orchestrator | Wednesday 14 May 2025 02:29:18 +0000 (0:00:18.803) 0:00:33.620 ********* 2025-05-14 02:31:08.835232 | orchestrator | 2025-05-14 02:31:08.835243 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 02:31:08.835253 | orchestrator | Wednesday 14 May 2025 02:29:18 +0000 (0:00:00.102) 0:00:33.722 ********* 2025-05-14 02:31:08.835264 | orchestrator | 2025-05-14 02:31:08.835275 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 02:31:08.835285 | orchestrator | Wednesday 14 May 2025 
02:29:19 +0000 (0:00:00.239) 0:00:33.962 ********* 2025-05-14 02:31:08.835296 | orchestrator | 2025-05-14 02:31:08.835307 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 02:31:08.835317 | orchestrator | Wednesday 14 May 2025 02:29:19 +0000 (0:00:00.053) 0:00:34.016 ********* 2025-05-14 02:31:08.835328 | orchestrator | 2025-05-14 02:31:08.835339 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 02:31:08.835350 | orchestrator | Wednesday 14 May 2025 02:29:19 +0000 (0:00:00.054) 0:00:34.070 ********* 2025-05-14 02:31:08.835360 | orchestrator | 2025-05-14 02:31:08.835371 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-14 02:31:08.835382 | orchestrator | Wednesday 14 May 2025 02:29:19 +0000 (0:00:00.065) 0:00:34.135 ********* 2025-05-14 02:31:08.835393 | orchestrator | 2025-05-14 02:31:08.835403 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-14 02:31:08.835414 | orchestrator | Wednesday 14 May 2025 02:29:19 +0000 (0:00:00.056) 0:00:34.191 ********* 2025-05-14 02:31:08.835425 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:31:08.835441 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.835453 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:31:08.835473 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.835492 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.835521 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:31:08.835540 | orchestrator | 2025-05-14 02:31:08.835558 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-14 02:31:08.835578 | orchestrator | Wednesday 14 May 2025 02:29:21 +0000 (0:00:01.933) 0:00:36.125 ********* 2025-05-14 02:31:08.835597 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:31:08.835617 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:31:08.835635 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:31:08.835649 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:31:08.835660 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:31:08.835671 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:31:08.835681 | orchestrator | 2025-05-14 02:31:08.835692 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-14 02:31:08.835756 | orchestrator | 2025-05-14 02:31:08.835771 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-14 02:31:08.835782 | orchestrator | Wednesday 14 May 2025 02:29:45 +0000 (0:00:24.183) 0:01:00.308 ********* 2025-05-14 02:31:08.835793 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:31:08.835804 | orchestrator | 2025-05-14 02:31:08.835814 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-14 02:31:08.835825 | orchestrator | Wednesday 14 May 2025 02:29:46 +0000 (0:00:01.312) 0:01:01.621 ********* 2025-05-14 02:31:08.835836 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:31:08.835847 | orchestrator | 2025-05-14 02:31:08.835867 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-14 02:31:08.835878 | orchestrator | Wednesday 14 
May 2025 02:29:47 +0000 (0:00:01.302) 0:01:02.923 ********* 2025-05-14 02:31:08.835888 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.835898 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.835908 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.835918 | orchestrator | 2025-05-14 02:31:08.835927 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-14 02:31:08.835936 | orchestrator | Wednesday 14 May 2025 02:29:49 +0000 (0:00:01.342) 0:01:04.265 ********* 2025-05-14 02:31:08.835946 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.835956 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.835965 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.835975 | orchestrator | 2025-05-14 02:31:08.835984 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-14 02:31:08.835994 | orchestrator | Wednesday 14 May 2025 02:29:49 +0000 (0:00:00.397) 0:01:04.663 ********* 2025-05-14 02:31:08.836003 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.836013 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.836023 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.836032 | orchestrator | 2025-05-14 02:31:08.836042 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-14 02:31:08.836051 | orchestrator | Wednesday 14 May 2025 02:29:50 +0000 (0:00:00.749) 0:01:05.412 ********* 2025-05-14 02:31:08.836060 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.836070 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.836079 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.836089 | orchestrator | 2025-05-14 02:31:08.836099 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-14 02:31:08.836108 | orchestrator | Wednesday 14 May 2025 02:29:51 +0000 (0:00:00.671) 0:01:06.084 ********* 2025-05-14 02:31:08.836118 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.836127 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.836137 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.836147 | orchestrator | 2025-05-14 02:31:08.836156 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-14 02:31:08.836166 | orchestrator | Wednesday 14 May 2025 02:29:51 +0000 (0:00:00.378) 0:01:06.463 ********* 2025-05-14 02:31:08.836183 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.836192 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.836202 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.836211 | orchestrator | 2025-05-14 02:31:08.836221 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-14 02:31:08.836230 | orchestrator | Wednesday 14 May 2025 02:29:52 +0000 (0:00:00.649) 0:01:07.113 ********* 2025-05-14 02:31:08.836240 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.836250 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.836259 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.836268 | orchestrator | 2025-05-14 02:31:08.836278 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-14 02:31:08.836287 | orchestrator | Wednesday 14 May 2025 02:29:53 +0000 (0:00:00.937) 0:01:08.050 ********* 2025-05-14 02:31:08.836297 | orchestrator | skipping: [testbed-node-0] 2025-05-14 
02:31:08.836306 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.836316 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.836325 | orchestrator | 2025-05-14 02:31:08.836335 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-14 02:31:08.836345 | orchestrator | Wednesday 14 May 2025 02:29:54 +0000 (0:00:00.914) 0:01:08.965 ********* 2025-05-14 02:31:08.836354 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.836364 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.836373 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.836383 | orchestrator | 2025-05-14 02:31:08.836392 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-14 02:31:08.836401 | orchestrator | Wednesday 14 May 2025 02:29:54 +0000 (0:00:00.585) 0:01:09.551 ********* 2025-05-14 02:31:08.836411 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.836420 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.836429 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.836439 | orchestrator | 2025-05-14 02:31:08.836448 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-14 02:31:08.836458 | orchestrator | Wednesday 14 May 2025 02:29:54 +0000 (0:00:00.388) 0:01:09.939 ********* 2025-05-14 02:31:08.836467 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.836477 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.836492 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.836501 | orchestrator | 2025-05-14 02:31:08.836511 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-14 02:31:08.836521 | orchestrator | Wednesday 14 May 2025 02:29:55 +0000 (0:00:00.356) 0:01:10.296 ********* 2025-05-14 02:31:08.836530 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.836540 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.836549 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.836559 | orchestrator | 2025-05-14 02:31:08.836568 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-14 02:31:08.836578 | orchestrator | Wednesday 14 May 2025 02:29:55 +0000 (0:00:00.460) 0:01:10.757 ********* 2025-05-14 02:31:08.836587 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.836596 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.836606 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.836615 | orchestrator | 2025-05-14 02:31:08.836625 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-14 02:31:08.836634 | orchestrator | Wednesday 14 May 2025 02:29:56 +0000 (0:00:00.302) 0:01:11.059 ********* 2025-05-14 02:31:08.836644 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.836653 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.836663 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.836672 | orchestrator | 2025-05-14 02:31:08.836682 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-14 02:31:08.836691 | orchestrator | Wednesday 14 May 2025 02:29:56 +0000 (0:00:00.323) 0:01:11.383 ********* 2025-05-14 02:31:08.836717 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.836733 | orchestrator | skipping: [testbed-node-1] 
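
The "Configure OVN in OVSDB" task earlier in this play writes per-chassis settings into the local Open vSwitch database as external_ids on the Open_vSwitch table. kolla-ansible does this through its own modules, but a minimal manual sketch of what ends up in OVS, using the values this log shows for testbed-node-0 (ovs-vsctl assumed to be reachable on the host or inside the openvswitch container), would be roughly:

  # hedged sketch only; values copied from the task output above for testbed-node-0
  ovs-vsctl set open_vswitch . external_ids:ovn-encap-ip=192.168.16.10
  ovs-vsctl set open_vswitch . external_ids:ovn-encap-type=geneve
  ovs-vsctl set open_vswitch . external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"
  ovs-vsctl set open_vswitch . external_ids:ovn-remote-probe-interval=60000
  ovs-vsctl set open_vswitch . external_ids:ovn-openflow-probe-interval=60
  ovs-vsctl set open_vswitch . external_ids:ovn-monitor-all=false
  ovs-vsctl set open_vswitch . external_ids:ovn-bridge-mappings=physnet1:br-ex
  ovs-vsctl set open_vswitch . external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"

As the task output shows, only the controller nodes (testbed-node-0/1/2) get ovn-bridge-mappings and ovn-cms-options set to present; on the compute nodes (testbed-node-3/4/5) those keys are ensured absent and ovn-chassis-mac-mappings is set instead.
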
2025-05-14 02:31:08.836743 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.836752 | orchestrator | 2025-05-14 02:31:08.836767 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-14 02:31:08.836776 | orchestrator | Wednesday 14 May 2025 02:29:56 +0000 (0:00:00.328) 0:01:11.711 ********* 2025-05-14 02:31:08.836786 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.836795 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.836805 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.836815 | orchestrator | 2025-05-14 02:31:08.836824 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-14 02:31:08.836834 | orchestrator | Wednesday 14 May 2025 02:29:56 +0000 (0:00:00.224) 0:01:11.935 ********* 2025-05-14 02:31:08.836843 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.836853 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.836862 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.836871 | orchestrator | 2025-05-14 02:31:08.836881 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-14 02:31:08.836890 | orchestrator | Wednesday 14 May 2025 02:29:57 +0000 (0:00:00.358) 0:01:12.294 ********* 2025-05-14 02:31:08.836900 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:31:08.836910 | orchestrator | 2025-05-14 02:31:08.836919 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-14 02:31:08.836929 | orchestrator | Wednesday 14 May 2025 02:29:57 +0000 (0:00:00.630) 0:01:12.925 ********* 2025-05-14 02:31:08.836938 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.836948 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.836957 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.836967 | orchestrator | 2025-05-14 02:31:08.836977 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-14 02:31:08.836986 | orchestrator | Wednesday 14 May 2025 02:29:58 +0000 (0:00:00.394) 0:01:13.320 ********* 2025-05-14 02:31:08.836996 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.837005 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.837015 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.837024 | orchestrator | 2025-05-14 02:31:08.837034 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-14 02:31:08.837043 | orchestrator | Wednesday 14 May 2025 02:29:58 +0000 (0:00:00.589) 0:01:13.910 ********* 2025-05-14 02:31:08.837053 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.837062 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.837072 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.837081 | orchestrator | 2025-05-14 02:31:08.837090 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-14 02:31:08.837100 | orchestrator | Wednesday 14 May 2025 02:29:59 +0000 (0:00:00.506) 0:01:14.416 ********* 2025-05-14 02:31:08.837109 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.837119 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.837128 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.837138 | orchestrator | 2025-05-14 02:31:08.837147 | 
orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-14 02:31:08.837157 | orchestrator | Wednesday 14 May 2025 02:30:00 +0000 (0:00:00.564) 0:01:14.980 ********* 2025-05-14 02:31:08.837166 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.837176 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.837185 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.837195 | orchestrator | 2025-05-14 02:31:08.837204 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-14 02:31:08.837213 | orchestrator | Wednesday 14 May 2025 02:30:00 +0000 (0:00:00.356) 0:01:15.336 ********* 2025-05-14 02:31:08.837223 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.837232 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.837242 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.837260 | orchestrator | 2025-05-14 02:31:08.837270 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-14 02:31:08.837279 | orchestrator | Wednesday 14 May 2025 02:30:00 +0000 (0:00:00.495) 0:01:15.831 ********* 2025-05-14 02:31:08.837289 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.837299 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.837308 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.837318 | orchestrator | 2025-05-14 02:31:08.837327 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-14 02:31:08.837337 | orchestrator | Wednesday 14 May 2025 02:30:01 +0000 (0:00:00.644) 0:01:16.476 ********* 2025-05-14 02:31:08.837346 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.837355 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.837365 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.837374 | orchestrator | 2025-05-14 02:31:08.837384 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-14 02:31:08.837393 | orchestrator | Wednesday 14 May 2025 02:30:02 +0000 (0:00:00.605) 0:01:17.082 ********* 2025-05-14 02:31:08.837403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837442 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837593 | orchestrator | 2025-05-14 02:31:08.837603 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-14 02:31:08.837617 | orchestrator | Wednesday 14 May 2025 02:30:03 +0000 (0:00:01.677) 0:01:18.760 ********* 2025-05-14 02:31:08.837639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837856 | orchestrator | 2025-05-14 02:31:08.837866 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 
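
After the restart handlers below bring up the clustered ovn_nb_db and ovn_sb_db containers, the role queries the Raft cluster leader ("Get OVN_Northbound cluster leader" / "Get OVN_Southbound cluster leader") and applies the connection settings only on the leader (testbed-node-0 here, the followers are skipped). A hedged way to inspect the same cluster state by hand, assuming the kolla container names shown above and the usual ovsdb-server control socket paths (which may differ in the kolla images), is:

  # hedged sketch; socket paths inside the containers are assumptions
  docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
  docker exec ovn_sb_db ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound

The cluster/status output reports this server's Role (leader or follower) and the list of cluster members, which is the information the leader-detection tasks below rely on.
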
2025-05-14 02:31:08.837876 | orchestrator | Wednesday 14 May 2025 02:30:08 +0000 (0:00:04.693) 0:01:23.454 ********* 2025-05-14 02:31:08.837891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.837976 | orchestrator | 2025-05-14 02:31:08.837983 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 02:31:08.837991 | orchestrator | Wednesday 14 May 2025 02:30:10 +0000 (0:00:02.397) 0:01:25.851 ********* 2025-05-14 02:31:08.837999 | orchestrator | 2025-05-14 02:31:08.838007 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 02:31:08.838042 | orchestrator | Wednesday 14 May 2025 02:30:10 +0000 (0:00:00.072) 0:01:25.924 ********* 2025-05-14 02:31:08.838053 | orchestrator | 2025-05-14 02:31:08.838061 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 02:31:08.838069 | orchestrator | Wednesday 14 May 2025 02:30:11 +0000 (0:00:00.064) 0:01:25.988 ********* 2025-05-14 02:31:08.838076 | orchestrator | 2025-05-14 02:31:08.838084 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-14 02:31:08.838092 | orchestrator | Wednesday 14 May 2025 02:30:11 +0000 (0:00:00.234) 0:01:26.223 ********* 2025-05-14 02:31:08.838103 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:31:08.838111 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:31:08.838119 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:31:08.838127 | orchestrator | 2025-05-14 02:31:08.838135 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-14 02:31:08.838142 | orchestrator | Wednesday 14 May 2025 02:30:18 +0000 (0:00:07.546) 0:01:33.769 ********* 2025-05-14 02:31:08.838150 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:31:08.838158 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:31:08.838165 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:31:08.838173 | orchestrator | 2025-05-14 02:31:08.838181 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-14 02:31:08.838189 | orchestrator | Wednesday 14 May 2025 02:30:26 +0000 (0:00:07.597) 0:01:41.367 ********* 2025-05-14 02:31:08.838196 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:31:08.838204 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:31:08.838212 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:31:08.838220 | orchestrator | 2025-05-14 02:31:08.838227 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-14 02:31:08.838235 | orchestrator | Wednesday 14 May 2025 02:30:29 +0000 (0:00:02.814) 0:01:44.182 ********* 2025-05-14 02:31:08.838243 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.838250 | orchestrator | 2025-05-14 02:31:08.838258 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-14 02:31:08.838266 | orchestrator | Wednesday 14 May 2025 02:30:29 +0000 (0:00:00.136) 0:01:44.318 ********* 2025-05-14 
02:31:08.838274 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.838281 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.838289 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.838303 | orchestrator | 2025-05-14 02:31:08.838316 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-14 02:31:08.838324 | orchestrator | Wednesday 14 May 2025 02:30:30 +0000 (0:00:01.030) 0:01:45.348 ********* 2025-05-14 02:31:08.838332 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.838340 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.838348 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:31:08.838355 | orchestrator | 2025-05-14 02:31:08.838363 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-14 02:31:08.838371 | orchestrator | Wednesday 14 May 2025 02:30:31 +0000 (0:00:00.643) 0:01:45.991 ********* 2025-05-14 02:31:08.838378 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.838386 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.838394 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.838402 | orchestrator | 2025-05-14 02:31:08.838410 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-14 02:31:08.838417 | orchestrator | Wednesday 14 May 2025 02:30:31 +0000 (0:00:00.934) 0:01:46.925 ********* 2025-05-14 02:31:08.838425 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.838433 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.838440 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:31:08.838448 | orchestrator | 2025-05-14 02:31:08.838456 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-14 02:31:08.838464 | orchestrator | Wednesday 14 May 2025 02:30:32 +0000 (0:00:00.615) 0:01:47.541 ********* 2025-05-14 02:31:08.838472 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.838480 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.838487 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.838495 | orchestrator | 2025-05-14 02:31:08.838503 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-14 02:31:08.838511 | orchestrator | Wednesday 14 May 2025 02:30:33 +0000 (0:00:01.096) 0:01:48.637 ********* 2025-05-14 02:31:08.838519 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.838526 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.838534 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.838542 | orchestrator | 2025-05-14 02:31:08.838549 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-14 02:31:08.838557 | orchestrator | Wednesday 14 May 2025 02:30:34 +0000 (0:00:00.661) 0:01:49.299 ********* 2025-05-14 02:31:08.838565 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.838573 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.838581 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.838588 | orchestrator | 2025-05-14 02:31:08.838596 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-14 02:31:08.838604 | orchestrator | Wednesday 14 May 2025 02:30:34 +0000 (0:00:00.341) 0:01:49.640 ********* 2025-05-14 02:31:08.838612 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838620 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838628 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838646 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838654 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838662 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838675 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838683 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838691 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838699 | orchestrator | 2025-05-14 02:31:08.838731 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-14 02:31:08.838739 | orchestrator | Wednesday 14 May 2025 02:30:36 +0000 (0:00:01.519) 0:01:51.159 ********* 2025-05-14 02:31:08.838747 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838755 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838763 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838784 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838834 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838876 | orchestrator | 2025-05-14 02:31:08.838888 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-14 02:31:08.838904 | orchestrator | Wednesday 14 May 2025 02:30:40 +0000 (0:00:03.906) 0:01:55.065 ********* 2025-05-14 02:31:08.838913 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838921 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838929 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838944 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838956 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838965 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838972 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838986 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.838995 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:31:08.839003 | orchestrator | 2025-05-14 02:31:08.839011 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 02:31:08.839019 | orchestrator | Wednesday 14 May 2025 02:30:43 +0000 (0:00:03.062) 0:01:58.128 ********* 2025-05-14 02:31:08.839026 | orchestrator | 2025-05-14 02:31:08.839034 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 02:31:08.839042 | orchestrator | Wednesday 14 May 2025 02:30:43 +0000 (0:00:00.162) 0:01:58.290 ********* 2025-05-14 02:31:08.839050 | orchestrator | 2025-05-14 02:31:08.839057 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-14 02:31:08.839065 | orchestrator | Wednesday 14 May 2025 02:30:43 +0000 (0:00:00.058) 0:01:58.349 ********* 2025-05-14 02:31:08.839073 | orchestrator | 2025-05-14 02:31:08.839080 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-14 02:31:08.839088 | orchestrator | Wednesday 14 May 2025 02:30:43 +0000 (0:00:00.053) 0:01:58.402 ********* 2025-05-14 02:31:08.839096 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:31:08.839104 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:31:08.839112 | orchestrator | 2025-05-14 02:31:08.839119 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-14 02:31:08.839132 | orchestrator | Wednesday 14 May 2025 02:30:49 +0000 (0:00:06.114) 0:02:04.517 ********* 2025-05-14 02:31:08.839140 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:31:08.839147 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:31:08.839155 | orchestrator | 2025-05-14 02:31:08.839163 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-14 02:31:08.839170 | orchestrator 
| Wednesday 14 May 2025 02:30:56 +0000 (0:00:06.584) 0:02:11.102 ********* 2025-05-14 02:31:08.839178 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:31:08.839186 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:31:08.839194 | orchestrator | 2025-05-14 02:31:08.839201 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-14 02:31:08.839209 | orchestrator | Wednesday 14 May 2025 02:31:02 +0000 (0:00:06.262) 0:02:17.364 ********* 2025-05-14 02:31:08.839217 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:31:08.839224 | orchestrator | 2025-05-14 02:31:08.839232 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-14 02:31:08.839240 | orchestrator | Wednesday 14 May 2025 02:31:02 +0000 (0:00:00.213) 0:02:17.578 ********* 2025-05-14 02:31:08.839248 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.839255 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.839263 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.839271 | orchestrator | 2025-05-14 02:31:08.839279 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-14 02:31:08.839287 | orchestrator | Wednesday 14 May 2025 02:31:03 +0000 (0:00:00.881) 0:02:18.460 ********* 2025-05-14 02:31:08.839294 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.839302 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.839309 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:31:08.839317 | orchestrator | 2025-05-14 02:31:08.839325 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-14 02:31:08.839333 | orchestrator | Wednesday 14 May 2025 02:31:04 +0000 (0:00:00.689) 0:02:19.150 ********* 2025-05-14 02:31:08.839340 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.839348 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.839356 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.839364 | orchestrator | 2025-05-14 02:31:08.839378 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-14 02:31:08.839386 | orchestrator | Wednesday 14 May 2025 02:31:05 +0000 (0:00:00.951) 0:02:20.101 ********* 2025-05-14 02:31:08.839393 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:31:08.839401 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:31:08.839408 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:31:08.839416 | orchestrator | 2025-05-14 02:31:08.839424 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-14 02:31:08.839432 | orchestrator | Wednesday 14 May 2025 02:31:05 +0000 (0:00:00.836) 0:02:20.938 ********* 2025-05-14 02:31:08.839439 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.839447 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.839455 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.839463 | orchestrator | 2025-05-14 02:31:08.839470 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-14 02:31:08.839478 | orchestrator | Wednesday 14 May 2025 02:31:06 +0000 (0:00:00.813) 0:02:21.751 ********* 2025-05-14 02:31:08.839486 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:31:08.839493 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:31:08.839501 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:31:08.839509 | orchestrator | 2025-05-14 
02:31:08.839517 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:31:08.839526 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-14 02:31:08.839534 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-14 02:31:08.839546 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-14 02:31:08.839558 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:31:08.839567 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:31:08.839574 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:31:08.839582 | orchestrator | 2025-05-14 02:31:08.839590 | orchestrator | 2025-05-14 02:31:08.839598 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:31:08.839606 | orchestrator | Wednesday 14 May 2025 02:31:08 +0000 (0:00:01.279) 0:02:23.030 ********* 2025-05-14 02:31:08.839613 | orchestrator | =============================================================================== 2025-05-14 02:31:08.839621 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 24.18s 2025-05-14 02:31:08.839629 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.80s 2025-05-14 02:31:08.839637 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.18s 2025-05-14 02:31:08.839644 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.66s 2025-05-14 02:31:08.839652 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.08s 2025-05-14 02:31:08.839659 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.69s 2025-05-14 02:31:08.839667 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.91s 2025-05-14 02:31:08.839675 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.28s 2025-05-14 02:31:08.839682 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.06s 2025-05-14 02:31:08.839690 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.49s 2025-05-14 02:31:08.839698 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.40s 2025-05-14 02:31:08.839728 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.99s 2025-05-14 02:31:08.839743 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.93s 2025-05-14 02:31:08.839756 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.72s 2025-05-14 02:31:08.839769 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.68s 2025-05-14 02:31:08.839777 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.52s 2025-05-14 02:31:08.839785 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.45s 2025-05-14 02:31:08.839793 | orchestrator | ovn-db : Checking for any 
existing OVN DB container volumes ------------- 1.34s 2025-05-14 02:31:08.839800 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.31s 2025-05-14 02:31:08.839808 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.30s 2025-05-14 02:31:08.839816 | orchestrator | 2025-05-14 02:31:08 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:08.839824 | orchestrator | 2025-05-14 02:31:08 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:08.839831 | orchestrator | 2025-05-14 02:31:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:11.883820 | orchestrator | 2025-05-14 02:31:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:11.885236 | orchestrator | 2025-05-14 02:31:11 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:11.889289 | orchestrator | 2025-05-14 02:31:11 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:11.889384 | orchestrator | 2025-05-14 02:31:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:14.943462 | orchestrator | 2025-05-14 02:31:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:14.945044 | orchestrator | 2025-05-14 02:31:14 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:14.946577 | orchestrator | 2025-05-14 02:31:14 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:14.946659 | orchestrator | 2025-05-14 02:31:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:17.979877 | orchestrator | 2025-05-14 02:31:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:17.979981 | orchestrator | 2025-05-14 02:31:17 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:17.980286 | orchestrator | 2025-05-14 02:31:17 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:17.980379 | orchestrator | 2025-05-14 02:31:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:21.017673 | orchestrator | 2025-05-14 02:31:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:21.017825 | orchestrator | 2025-05-14 02:31:21 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:21.018352 | orchestrator | 2025-05-14 02:31:21 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:21.018379 | orchestrator | 2025-05-14 02:31:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:24.055795 | orchestrator | 2025-05-14 02:31:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:24.057252 | orchestrator | 2025-05-14 02:31:24 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:24.060256 | orchestrator | 2025-05-14 02:31:24 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:24.060578 | orchestrator | 2025-05-14 02:31:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:27.095768 | orchestrator | 2025-05-14 02:31:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:27.098122 | orchestrator | 2025-05-14 02:31:27 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state 
STARTED 2025-05-14 02:31:27.100000 | orchestrator | 2025-05-14 02:31:27 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:27.100034 | orchestrator | 2025-05-14 02:31:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:30.156300 | orchestrator | 2025-05-14 02:31:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:30.160901 | orchestrator | 2025-05-14 02:31:30 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:30.162395 | orchestrator | 2025-05-14 02:31:30 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:30.162473 | orchestrator | 2025-05-14 02:31:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:33.205490 | orchestrator | 2025-05-14 02:31:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:33.207190 | orchestrator | 2025-05-14 02:31:33 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:33.211046 | orchestrator | 2025-05-14 02:31:33 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:33.211072 | orchestrator | 2025-05-14 02:31:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:36.257408 | orchestrator | 2025-05-14 02:31:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:36.257508 | orchestrator | 2025-05-14 02:31:36 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:36.258237 | orchestrator | 2025-05-14 02:31:36 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:36.258268 | orchestrator | 2025-05-14 02:31:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:39.305683 | orchestrator | 2025-05-14 02:31:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:39.305874 | orchestrator | 2025-05-14 02:31:39 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:39.306263 | orchestrator | 2025-05-14 02:31:39 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:39.306291 | orchestrator | 2025-05-14 02:31:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:42.350572 | orchestrator | 2025-05-14 02:31:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:42.352117 | orchestrator | 2025-05-14 02:31:42 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:42.354853 | orchestrator | 2025-05-14 02:31:42 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:42.354914 | orchestrator | 2025-05-14 02:31:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:45.396619 | orchestrator | 2025-05-14 02:31:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:45.401025 | orchestrator | 2025-05-14 02:31:45 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:45.403048 | orchestrator | 2025-05-14 02:31:45 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:45.404056 | orchestrator | 2025-05-14 02:31:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:48.467416 | orchestrator | 2025-05-14 02:31:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:48.469280 | orchestrator 
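In the ovn-db play that just finished, "Get OVN_Northbound/OVN_Southbound cluster leader" ran on all three controllers, but "Configure OVN NB/SB connection settings" changed only testbed-node-0 and was skipped on the other two nodes: the listener for the clustered ovsdb-server only needs to be configured once, against the elected leader. A rough sketch of such a task, assuming the standard OVN NB port 6641 and the kolla-style container name (not the literal kolla-ansible implementation):

# Rough sketch of the "Configure OVN NB connection settings" step: run ovn-nbctl inside
# the ovn_nb_db container, but only on the host that currently holds raft leadership.
- name: Configure OVN NB connection settings
  ansible.builtin.command: >
    docker exec ovn_nb_db ovn-nbctl set-connection ptcp:6641:{{ api_interface_address }}
  when: inventory_hostname == ovn_nb_db_cluster_leader   # hypothetical fact set by the leader lookup
  changed_when: true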
| 2025-05-14 02:31:48 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:48.471283 | orchestrator | 2025-05-14 02:31:48 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:48.471355 | orchestrator | 2025-05-14 02:31:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:51.523529 | orchestrator | 2025-05-14 02:31:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:51.527128 | orchestrator | 2025-05-14 02:31:51 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:51.531568 | orchestrator | 2025-05-14 02:31:51 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:51.531631 | orchestrator | 2025-05-14 02:31:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:54.580914 | orchestrator | 2025-05-14 02:31:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:54.585567 | orchestrator | 2025-05-14 02:31:54 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:54.585730 | orchestrator | 2025-05-14 02:31:54 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:54.585756 | orchestrator | 2025-05-14 02:31:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:31:57.644815 | orchestrator | 2025-05-14 02:31:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:31:57.648567 | orchestrator | 2025-05-14 02:31:57 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:31:57.651315 | orchestrator | 2025-05-14 02:31:57 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:31:57.651357 | orchestrator | 2025-05-14 02:31:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:00.698340 | orchestrator | 2025-05-14 02:32:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:00.700042 | orchestrator | 2025-05-14 02:32:00 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:00.702494 | orchestrator | 2025-05-14 02:32:00 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:00.702597 | orchestrator | 2025-05-14 02:32:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:03.752049 | orchestrator | 2025-05-14 02:32:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:03.753770 | orchestrator | 2025-05-14 02:32:03 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:03.755529 | orchestrator | 2025-05-14 02:32:03 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:03.755751 | orchestrator | 2025-05-14 02:32:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:06.822966 | orchestrator | 2025-05-14 02:32:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:06.823112 | orchestrator | 2025-05-14 02:32:06 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:06.823135 | orchestrator | 2025-05-14 02:32:06 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:06.823154 | orchestrator | 2025-05-14 02:32:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:09.878801 | orchestrator | 2025-05-14 02:32:09 | INFO  | Task 
d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:09.882143 | orchestrator | 2025-05-14 02:32:09 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:09.884523 | orchestrator | 2025-05-14 02:32:09 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:09.884594 | orchestrator | 2025-05-14 02:32:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:12.932144 | orchestrator | 2025-05-14 02:32:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:12.933844 | orchestrator | 2025-05-14 02:32:12 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:12.935293 | orchestrator | 2025-05-14 02:32:12 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:12.935895 | orchestrator | 2025-05-14 02:32:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:15.998590 | orchestrator | 2025-05-14 02:32:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:16.000447 | orchestrator | 2025-05-14 02:32:15 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:16.003411 | orchestrator | 2025-05-14 02:32:16 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:16.003638 | orchestrator | 2025-05-14 02:32:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:19.050859 | orchestrator | 2025-05-14 02:32:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:19.051117 | orchestrator | 2025-05-14 02:32:19 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:19.051843 | orchestrator | 2025-05-14 02:32:19 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:19.051872 | orchestrator | 2025-05-14 02:32:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:22.097774 | orchestrator | 2025-05-14 02:32:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:22.098192 | orchestrator | 2025-05-14 02:32:22 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:22.098855 | orchestrator | 2025-05-14 02:32:22 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:22.098924 | orchestrator | 2025-05-14 02:32:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:25.137865 | orchestrator | 2025-05-14 02:32:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:25.138632 | orchestrator | 2025-05-14 02:32:25 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:25.139431 | orchestrator | 2025-05-14 02:32:25 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:25.143036 | orchestrator | 2025-05-14 02:32:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:28.186242 | orchestrator | 2025-05-14 02:32:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:28.189133 | orchestrator | 2025-05-14 02:32:28 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:28.192514 | orchestrator | 2025-05-14 02:32:28 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:28.192583 | orchestrator | 2025-05-14 02:32:28 | INFO  | Wait 1 second(s) until the next 
check 2025-05-14 02:32:31.235568 | orchestrator | 2025-05-14 02:32:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:31.236100 | orchestrator | 2025-05-14 02:32:31 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:31.237080 | orchestrator | 2025-05-14 02:32:31 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:31.237207 | orchestrator | 2025-05-14 02:32:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:34.294959 | orchestrator | 2025-05-14 02:32:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:34.298333 | orchestrator | 2025-05-14 02:32:34 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:34.300370 | orchestrator | 2025-05-14 02:32:34 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:34.300444 | orchestrator | 2025-05-14 02:32:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:37.340255 | orchestrator | 2025-05-14 02:32:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:37.340364 | orchestrator | 2025-05-14 02:32:37 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:37.340461 | orchestrator | 2025-05-14 02:32:37 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:37.343974 | orchestrator | 2025-05-14 02:32:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:40.382389 | orchestrator | 2025-05-14 02:32:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:40.385779 | orchestrator | 2025-05-14 02:32:40 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:40.386831 | orchestrator | 2025-05-14 02:32:40 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:40.386868 | orchestrator | 2025-05-14 02:32:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:43.427287 | orchestrator | 2025-05-14 02:32:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:43.429974 | orchestrator | 2025-05-14 02:32:43 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:43.429990 | orchestrator | 2025-05-14 02:32:43 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:43.429997 | orchestrator | 2025-05-14 02:32:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:46.462960 | orchestrator | 2025-05-14 02:32:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:46.463234 | orchestrator | 2025-05-14 02:32:46 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:46.463937 | orchestrator | 2025-05-14 02:32:46 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:46.464072 | orchestrator | 2025-05-14 02:32:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:49.505759 | orchestrator | 2025-05-14 02:32:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:49.506359 | orchestrator | 2025-05-14 02:32:49 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:49.507084 | orchestrator | 2025-05-14 02:32:49 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 
02:32:49.507200 | orchestrator | 2025-05-14 02:32:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:52.542383 | orchestrator | 2025-05-14 02:32:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:52.542781 | orchestrator | 2025-05-14 02:32:52 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:52.543935 | orchestrator | 2025-05-14 02:32:52 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:52.544013 | orchestrator | 2025-05-14 02:32:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:55.578211 | orchestrator | 2025-05-14 02:32:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:55.581223 | orchestrator | 2025-05-14 02:32:55 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:55.581301 | orchestrator | 2025-05-14 02:32:55 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:55.581317 | orchestrator | 2025-05-14 02:32:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:32:58.615816 | orchestrator | 2025-05-14 02:32:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:32:58.617012 | orchestrator | 2025-05-14 02:32:58 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:32:58.618515 | orchestrator | 2025-05-14 02:32:58 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:32:58.618810 | orchestrator | 2025-05-14 02:32:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:01.665989 | orchestrator | 2025-05-14 02:33:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:01.667436 | orchestrator | 2025-05-14 02:33:01 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:01.669691 | orchestrator | 2025-05-14 02:33:01 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:01.669897 | orchestrator | 2025-05-14 02:33:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:04.713847 | orchestrator | 2025-05-14 02:33:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:04.717380 | orchestrator | 2025-05-14 02:33:04 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:04.717985 | orchestrator | 2025-05-14 02:33:04 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:04.718148 | orchestrator | 2025-05-14 02:33:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:07.765207 | orchestrator | 2025-05-14 02:33:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:07.767699 | orchestrator | 2025-05-14 02:33:07 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:07.769994 | orchestrator | 2025-05-14 02:33:07 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:07.770041 | orchestrator | 2025-05-14 02:33:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:10.823427 | orchestrator | 2025-05-14 02:33:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:10.824419 | orchestrator | 2025-05-14 02:33:10 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:10.826484 | orchestrator | 2025-05-14 
02:33:10 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:10.826682 | orchestrator | 2025-05-14 02:33:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:13.876404 | orchestrator | 2025-05-14 02:33:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:13.878341 | orchestrator | 2025-05-14 02:33:13 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:13.880514 | orchestrator | 2025-05-14 02:33:13 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:13.880698 | orchestrator | 2025-05-14 02:33:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:16.936950 | orchestrator | 2025-05-14 02:33:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:16.937867 | orchestrator | 2025-05-14 02:33:16 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:16.940387 | orchestrator | 2025-05-14 02:33:16 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:16.940485 | orchestrator | 2025-05-14 02:33:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:19.996278 | orchestrator | 2025-05-14 02:33:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:19.996406 | orchestrator | 2025-05-14 02:33:19 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:19.998984 | orchestrator | 2025-05-14 02:33:19 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:19.999142 | orchestrator | 2025-05-14 02:33:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:23.054364 | orchestrator | 2025-05-14 02:33:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:23.055914 | orchestrator | 2025-05-14 02:33:23 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:23.057372 | orchestrator | 2025-05-14 02:33:23 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:23.057485 | orchestrator | 2025-05-14 02:33:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:26.101452 | orchestrator | 2025-05-14 02:33:26 | INFO  | Task dbcfff48-e3e4-4d8e-bdf4-76be0dec86a9 is in state STARTED 2025-05-14 02:33:26.103707 | orchestrator | 2025-05-14 02:33:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:26.106792 | orchestrator | 2025-05-14 02:33:26 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:26.109397 | orchestrator | 2025-05-14 02:33:26 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:26.109451 | orchestrator | 2025-05-14 02:33:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:29.173198 | orchestrator | 2025-05-14 02:33:29 | INFO  | Task dbcfff48-e3e4-4d8e-bdf4-76be0dec86a9 is in state STARTED 2025-05-14 02:33:29.173952 | orchestrator | 2025-05-14 02:33:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:29.176083 | orchestrator | 2025-05-14 02:33:29 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:29.177121 | orchestrator | 2025-05-14 02:33:29 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:29.177173 | orchestrator | 2025-05-14 02:33:29 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 02:33:32.228810 | orchestrator | 2025-05-14 02:33:32 | INFO  | Task dbcfff48-e3e4-4d8e-bdf4-76be0dec86a9 is in state STARTED 2025-05-14 02:33:32.234203 | orchestrator | 2025-05-14 02:33:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:32.235088 | orchestrator | 2025-05-14 02:33:32 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:32.236807 | orchestrator | 2025-05-14 02:33:32 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:32.237114 | orchestrator | 2025-05-14 02:33:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:35.286475 | orchestrator | 2025-05-14 02:33:35 | INFO  | Task dbcfff48-e3e4-4d8e-bdf4-76be0dec86a9 is in state SUCCESS 2025-05-14 02:33:35.286727 | orchestrator | 2025-05-14 02:33:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:35.289964 | orchestrator | 2025-05-14 02:33:35 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:35.292060 | orchestrator | 2025-05-14 02:33:35 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:35.293188 | orchestrator | 2025-05-14 02:33:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:38.338198 | orchestrator | 2025-05-14 02:33:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:38.340186 | orchestrator | 2025-05-14 02:33:38 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:38.341282 | orchestrator | 2025-05-14 02:33:38 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:38.341319 | orchestrator | 2025-05-14 02:33:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:41.402254 | orchestrator | 2025-05-14 02:33:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:41.402343 | orchestrator | 2025-05-14 02:33:41 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:41.404111 | orchestrator | 2025-05-14 02:33:41 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:41.404160 | orchestrator | 2025-05-14 02:33:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:44.448074 | orchestrator | 2025-05-14 02:33:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:44.451309 | orchestrator | 2025-05-14 02:33:44 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:44.452535 | orchestrator | 2025-05-14 02:33:44 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:44.452757 | orchestrator | 2025-05-14 02:33:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:47.509402 | orchestrator | 2025-05-14 02:33:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:47.509506 | orchestrator | 2025-05-14 02:33:47 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:47.510878 | orchestrator | 2025-05-14 02:33:47 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:47.510957 | orchestrator | 2025-05-14 02:33:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:50.562225 | orchestrator | 2025-05-14 02:33:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state 
STARTED 2025-05-14 02:33:50.562319 | orchestrator | 2025-05-14 02:33:50 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:50.564128 | orchestrator | 2025-05-14 02:33:50 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:50.564217 | orchestrator | 2025-05-14 02:33:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:53.610386 | orchestrator | 2025-05-14 02:33:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:53.612534 | orchestrator | 2025-05-14 02:33:53 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:53.615513 | orchestrator | 2025-05-14 02:33:53 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:53.615580 | orchestrator | 2025-05-14 02:33:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:56.687418 | orchestrator | 2025-05-14 02:33:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:56.693114 | orchestrator | 2025-05-14 02:33:56 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:56.694492 | orchestrator | 2025-05-14 02:33:56 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:56.694539 | orchestrator | 2025-05-14 02:33:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:33:59.752221 | orchestrator | 2025-05-14 02:33:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:33:59.754062 | orchestrator | 2025-05-14 02:33:59 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:33:59.755126 | orchestrator | 2025-05-14 02:33:59 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:33:59.755176 | orchestrator | 2025-05-14 02:33:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:02.797420 | orchestrator | 2025-05-14 02:34:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:02.799377 | orchestrator | 2025-05-14 02:34:02 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:02.802222 | orchestrator | 2025-05-14 02:34:02 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:02.802282 | orchestrator | 2025-05-14 02:34:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:05.847132 | orchestrator | 2025-05-14 02:34:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:05.849410 | orchestrator | 2025-05-14 02:34:05 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:05.851928 | orchestrator | 2025-05-14 02:34:05 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:05.851979 | orchestrator | 2025-05-14 02:34:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:08.900356 | orchestrator | 2025-05-14 02:34:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:08.902291 | orchestrator | 2025-05-14 02:34:08 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:08.904167 | orchestrator | 2025-05-14 02:34:08 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:08.904268 | orchestrator | 2025-05-14 02:34:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:11.939860 | orchestrator 
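The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines are the OSISM orchestrator polling the queued deployment tasks once per second; a task drops out of the list once it reports SUCCESS, as task dbcfff48-e3e4-4d8e-bdf4-76be0dec86a9 does above. The same wait-until-done pattern can be written with Ansible's until/retries/delay, roughly as follows (the command is purely hypothetical; the real polling is done by the osism client, not by an Ansible task):

# Minimal sketch of the poll-until-SUCCESS loop seen in the log.
- name: Wait for deployment task to finish
  ansible.builtin.command: /usr/local/bin/get-task-state d82f8ed9-5664-4bc4-a3e9-26e1a4e29521   # hypothetical helper
  register: task_state
  until: task_state.stdout in ["SUCCESS", "FAILURE"]
  retries: 600      # give up after ~10 minutes
  delay: 1          # matches the "Wait 1 second(s)" interval in the log
  changed_when: false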
| 2025-05-14 02:34:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:11.940175 | orchestrator | 2025-05-14 02:34:11 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:11.944501 | orchestrator | 2025-05-14 02:34:11 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:11.944568 | orchestrator | 2025-05-14 02:34:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:15.000412 | orchestrator | 2025-05-14 02:34:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:15.000494 | orchestrator | 2025-05-14 02:34:14 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:15.004240 | orchestrator | 2025-05-14 02:34:15 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:15.004294 | orchestrator | 2025-05-14 02:34:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:18.050852 | orchestrator | 2025-05-14 02:34:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:18.052588 | orchestrator | 2025-05-14 02:34:18 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:18.054008 | orchestrator | 2025-05-14 02:34:18 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:18.054116 | orchestrator | 2025-05-14 02:34:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:21.104041 | orchestrator | 2025-05-14 02:34:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:21.105862 | orchestrator | 2025-05-14 02:34:21 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:21.109498 | orchestrator | 2025-05-14 02:34:21 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:21.109888 | orchestrator | 2025-05-14 02:34:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:24.168883 | orchestrator | 2025-05-14 02:34:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:24.170249 | orchestrator | 2025-05-14 02:34:24 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:24.173819 | orchestrator | 2025-05-14 02:34:24 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:24.173909 | orchestrator | 2025-05-14 02:34:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:27.217386 | orchestrator | 2025-05-14 02:34:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:27.218486 | orchestrator | 2025-05-14 02:34:27 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:27.219719 | orchestrator | 2025-05-14 02:34:27 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:27.219787 | orchestrator | 2025-05-14 02:34:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:30.284712 | orchestrator | 2025-05-14 02:34:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:30.286006 | orchestrator | 2025-05-14 02:34:30 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:30.290387 | orchestrator | 2025-05-14 02:34:30 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:30.290447 | orchestrator | 2025-05-14 02:34:30 | 
INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:33.334717 | orchestrator | 2025-05-14 02:34:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:33.335616 | orchestrator | 2025-05-14 02:34:33 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:33.339809 | orchestrator | 2025-05-14 02:34:33 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:33.339873 | orchestrator | 2025-05-14 02:34:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:36.382549 | orchestrator | 2025-05-14 02:34:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:36.382805 | orchestrator | 2025-05-14 02:34:36 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:36.383164 | orchestrator | 2025-05-14 02:34:36 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:36.383198 | orchestrator | 2025-05-14 02:34:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:39.419358 | orchestrator | 2025-05-14 02:34:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:39.420345 | orchestrator | 2025-05-14 02:34:39 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:39.420915 | orchestrator | 2025-05-14 02:34:39 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:39.420937 | orchestrator | 2025-05-14 02:34:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:42.475012 | orchestrator | 2025-05-14 02:34:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:42.475619 | orchestrator | 2025-05-14 02:34:42 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:42.476814 | orchestrator | 2025-05-14 02:34:42 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:42.478366 | orchestrator | 2025-05-14 02:34:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:45.534487 | orchestrator | 2025-05-14 02:34:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:45.534811 | orchestrator | 2025-05-14 02:34:45 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:45.535911 | orchestrator | 2025-05-14 02:34:45 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:45.535935 | orchestrator | 2025-05-14 02:34:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:48.593293 | orchestrator | 2025-05-14 02:34:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:48.593581 | orchestrator | 2025-05-14 02:34:48 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:48.595909 | orchestrator | 2025-05-14 02:34:48 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state STARTED 2025-05-14 02:34:48.595999 | orchestrator | 2025-05-14 02:34:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:51.655108 | orchestrator | 2025-05-14 02:34:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:51.657439 | orchestrator | 2025-05-14 02:34:51 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:34:51.658786 | orchestrator | 2025-05-14 02:34:51 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 
is in state STARTED 2025-05-14 02:34:51.660028 | orchestrator | 2025-05-14 02:34:51 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:34:51.667228 | orchestrator | 2025-05-14 02:34:51 | INFO  | Task 2d22fff8-f436-4c3b-a4f1-4de61b65985d is in state SUCCESS 2025-05-14 02:34:51.670581 | orchestrator | 2025-05-14 02:34:51.670709 | orchestrator | None 2025-05-14 02:34:51.670726 | orchestrator | 2025-05-14 02:34:51.670739 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:34:51.670751 | orchestrator | 2025-05-14 02:34:51.670762 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:34:51.670774 | orchestrator | Wednesday 14 May 2025 02:27:24 +0000 (0:00:00.359) 0:00:00.359 ********* 2025-05-14 02:34:51.670785 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.670797 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.670808 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.670819 | orchestrator | 2025-05-14 02:34:51.670830 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:34:51.670840 | orchestrator | Wednesday 14 May 2025 02:27:24 +0000 (0:00:00.547) 0:00:00.906 ********* 2025-05-14 02:34:51.670852 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-05-14 02:34:51.670863 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-05-14 02:34:51.670874 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-05-14 02:34:51.670940 | orchestrator | 2025-05-14 02:34:51.670954 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-05-14 02:34:51.671052 | orchestrator | 2025-05-14 02:34:51.671067 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-14 02:34:51.671078 | orchestrator | Wednesday 14 May 2025 02:27:24 +0000 (0:00:00.339) 0:00:01.246 ********* 2025-05-14 02:34:51.671089 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.671100 | orchestrator | 2025-05-14 02:34:51.671111 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-05-14 02:34:51.671122 | orchestrator | Wednesday 14 May 2025 02:27:25 +0000 (0:00:00.938) 0:00:02.184 ********* 2025-05-14 02:34:51.671133 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.671144 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.671155 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.671165 | orchestrator | 2025-05-14 02:34:51.671177 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-14 02:34:51.671188 | orchestrator | Wednesday 14 May 2025 02:27:26 +0000 (0:00:00.828) 0:00:03.012 ********* 2025-05-14 02:34:51.671198 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.671209 | orchestrator | 2025-05-14 02:34:51.671220 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-05-14 02:34:51.671232 | orchestrator | Wednesday 14 May 2025 02:27:27 +0000 (0:00:00.899) 0:00:03.912 ********* 2025-05-14 02:34:51.671243 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.671253 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.671264 | 
orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.671301 | orchestrator | 2025-05-14 02:34:51.671314 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-05-14 02:34:51.671324 | orchestrator | Wednesday 14 May 2025 02:27:28 +0000 (0:00:01.094) 0:00:05.007 ********* 2025-05-14 02:34:51.671335 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-14 02:34:51.671346 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-14 02:34:51.671357 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-14 02:34:51.671367 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-14 02:34:51.671378 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-14 02:34:51.671389 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-14 02:34:51.671401 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-14 02:34:51.671412 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-14 02:34:51.671422 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-14 02:34:51.671433 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-14 02:34:51.671445 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-14 02:34:51.671470 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-14 02:34:51.671481 | orchestrator | 2025-05-14 02:34:51.671525 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-14 02:34:51.671538 | orchestrator | Wednesday 14 May 2025 02:27:32 +0000 (0:00:03.835) 0:00:08.842 ********* 2025-05-14 02:34:51.671550 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-14 02:34:51.671561 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-14 02:34:51.671572 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-14 02:34:51.671582 | orchestrator | 2025-05-14 02:34:51.671593 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-14 02:34:51.671604 | orchestrator | Wednesday 14 May 2025 02:27:34 +0000 (0:00:01.705) 0:00:10.547 ********* 2025-05-14 02:34:51.671615 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-14 02:34:51.671729 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-14 02:34:51.671743 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-14 02:34:51.671754 | orchestrator | 2025-05-14 02:34:51.671764 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-14 02:34:51.671775 | orchestrator | Wednesday 14 May 2025 02:27:36 +0000 (0:00:02.171) 0:00:12.718 ********* 2025-05-14 02:34:51.671787 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-14 02:34:51.671798 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.671860 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-14 02:34:51.671875 | orchestrator 
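The sysctl and module-load steps above prepare the controllers for the load balancer: the ip_nonlocal_bind settings let haproxy/keepalived bind the virtual IP on hosts that do not currently hold it, and the ip_vs module backs keepalived's virtual-server support; net.ipv4.tcp_retries2 is left untouched (KOLLA_UNSET). Expressed as standalone tasks, the equivalent looks roughly like this (a sketch with the values taken from the log, not the parameterised kolla-ansible roles):

# Sketch of the sysctl and module persistence steps above.
- name: Setting sysctl values
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: true
    state: present
  loop:
    - { name: net.ipv6.ip_nonlocal_bind, value: "1" }
    - { name: net.ipv4.ip_nonlocal_bind, value: "1" }
    - { name: net.unix.max_dgram_qlen, value: "128" }

- name: Load modules
  community.general.modprobe:
    name: ip_vs
    state: present

- name: Persist modules via modules-load.d
  ansible.builtin.copy:
    dest: /etc/modules-load.d/ip_vs.conf
    content: "ip_vs\n"
    mode: "0644"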
| skipping: [testbed-node-1] 2025-05-14 02:34:51.671886 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-14 02:34:51.671897 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.671908 | orchestrator | 2025-05-14 02:34:51.671919 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-14 02:34:51.671973 | orchestrator | Wednesday 14 May 2025 02:27:37 +0000 (0:00:01.179) 0:00:13.898 ********* 2025-05-14 02:34:51.671988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.672017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.672029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.672042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.672088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:34:51.672112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.672125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.672144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.672156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:34:51.672168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.672180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:34:51.672197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.672208 | orchestrator | 2025-05-14 02:34:51.672220 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-14 02:34:51.672231 | orchestrator | Wednesday 14 May 2025 02:27:41 +0000 (0:00:03.650) 0:00:17.549 ********* 2025-05-14 02:34:51.672242 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.672253 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.672264 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.672275 | orchestrator | 2025-05-14 02:34:51.672292 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-14 02:34:51.672304 | orchestrator | Wednesday 14 May 2025 02:27:42 +0000 (0:00:01.784) 0:00:19.333 ********* 2025-05-14 02:34:51.672322 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-14 02:34:51.672386 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-14 02:34:51.672398 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-14 02:34:51.672410 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-14 02:34:51.672420 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-14 02:34:51.672431 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-14 02:34:51.672442 | orchestrator | 2025-05-14 02:34:51.672453 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-14 02:34:51.672464 | orchestrator | Wednesday 14 May 2025 02:27:47 +0000 (0:00:04.023) 0:00:23.357 ********* 2025-05-14 02:34:51.672475 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.672486 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.672497 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.672507 | orchestrator | 2025-05-14 02:34:51.672519 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-14 02:34:51.672530 | orchestrator | Wednesday 14 May 2025 02:27:50 +0000 (0:00:03.871) 0:00:27.229 ********* 2025-05-14 02:34:51.672565 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.672577 | orchestrator | 
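The "Ensuring config directories exist" task above iterates over the load balancer service map (haproxy, proxysql, keepalived, haproxy-ssh) and only creates directories for services whose enabled flag is true, which is why every haproxy-ssh item is skipped. A sketch of that per-service pattern (the variable name and permissions are illustrative, not the exact kolla-ansible task):

# Sketch of the "Ensuring config directories exist" pattern above.
- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"
    state: directory
    mode: "0770"
  when: item.value.enabled | bool
  with_dict: "{{ loadbalancer_services }}"   # the haproxy/proxysql/keepalived/haproxy-ssh map shown in the items above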
ok: [testbed-node-2] 2025-05-14 02:34:51.672588 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.672599 | orchestrator | 2025-05-14 02:34:51.672610 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-14 02:34:51.672671 | orchestrator | Wednesday 14 May 2025 02:27:53 +0000 (0:00:02.284) 0:00:29.513 ********* 2025-05-14 02:34:51.672687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-14 02:34:51.672700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-14 02:34:51.672748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-14 02:34:51.672762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:34:51.672792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:34:51.672806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:34:51.672818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:34:51.672830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:34:51.672923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:34:51.672941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.672960 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.672972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 
'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.672983 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.673003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.673015 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.673026 | orchestrator | 2025-05-14 02:34:51.673037 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-14 02:34:51.673048 | orchestrator | Wednesday 14 May 2025 02:27:54 +0000 (0:00:01.730) 0:00:31.243 ********* 2025-05-14 02:34:51.673060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.673073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.673085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.673101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.673126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.673139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:34:51.673151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:34:51.673162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.673173 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.673191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.673215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:34:51.673237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.673249 | orchestrator | 2025-05-14 02:34:51.673260 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-14 02:34:51.673271 | orchestrator | Wednesday 14 May 2025 02:28:00 +0000 (0:00:05.583) 0:00:36.827 ********* 2025-05-14 02:34:51.673283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.673295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.673306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.673318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.673389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.674605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.674699 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:34:51.674712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.674723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:34:51.674733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.674763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:34:51.674774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.674786 | orchestrator | 2025-05-14 02:34:51.674803 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-14 02:34:51.674829 | orchestrator | Wednesday 14 May 2025 02:28:03 +0000 (0:00:03.250) 0:00:40.077 ********* 2025-05-14 02:34:51.674895 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-14 02:34:51.674926 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-14 02:34:51.674936 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-14 02:34:51.674972 | orchestrator | 2025-05-14 02:34:51.674983 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-14 02:34:51.674993 | orchestrator | Wednesday 14 May 2025 02:28:05 +0000 (0:00:02.130) 0:00:42.207 ********* 2025-05-14 02:34:51.675002 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-14 02:34:51.675012 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-14 02:34:51.675047 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-14 02:34:51.675057 | orchestrator | 2025-05-14 02:34:51.675067 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-14 02:34:51.675076 | orchestrator | Wednesday 14 May 2025 02:28:11 +0000 (0:00:05.847) 0:00:48.055 ********* 2025-05-14 02:34:51.675086 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.675096 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.675106 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.675115 | orchestrator | 2025-05-14 02:34:51.675125 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-14 02:34:51.675134 | orchestrator | Wednesday 14 May 2025 02:28:12 +0000 (0:00:01.149) 0:00:49.205 ********* 2025-05-14 02:34:51.675144 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-14 02:34:51.675155 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-14 02:34:51.675166 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-14 02:34:51.675187 | orchestrator | 2025-05-14 02:34:51.675199 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-14 02:34:51.675210 | orchestrator | Wednesday 14 May 2025 02:28:15 +0000 (0:00:03.115) 0:00:52.321 ********* 2025-05-14 02:34:51.675221 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-14 02:34:51.675232 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-14 02:34:51.675242 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-14 02:34:51.675252 | orchestrator | 2025-05-14 02:34:51.675261 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-14 02:34:51.675271 | orchestrator | Wednesday 14 May 2025 02:28:18 +0000 (0:00:02.148) 0:00:54.470 ********* 2025-05-14 02:34:51.675280 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-14 02:34:51.675290 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-14 02:34:51.675299 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-14 02:34:51.675308 | orchestrator | 2025-05-14 02:34:51.675318 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-14 02:34:51.675327 | orchestrator | Wednesday 14 May 2025 02:28:20 +0000 (0:00:02.799) 0:00:57.269 ********* 2025-05-14 02:34:51.675337 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-14 02:34:51.675346 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-14 02:34:51.675355 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-14 02:34:51.675365 | orchestrator | 2025-05-14 02:34:51.675374 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-14 02:34:51.675384 | orchestrator | Wednesday 14 May 2025 02:28:23 +0000 (0:00:02.303) 0:00:59.572 ********* 2025-05-14 02:34:51.675399 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.675409 | orchestrator | 2025-05-14 02:34:51.675419 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-14 02:34:51.675428 | orchestrator | Wednesday 14 May 2025 02:28:24 +0000 (0:00:00.787) 0:01:00.360 ********* 2025-05-14 02:34:51.675438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.675457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.675468 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.675483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.675493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.675503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.675518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:34:51.675535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:34:51.675545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:34:51.675561 | orchestrator | 2025-05-14 02:34:51.675571 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-14 02:34:51.675581 | orchestrator | Wednesday 14 May 2025 02:28:27 +0000 (0:00:03.596) 0:01:03.956 ********* 2025-05-14 02:34:51.675591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-14 02:34:51.675601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:34:51.675611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:34:51.675651 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.675668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-14 02:34:51.675679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:34:51.675696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:34:51.675713 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.675723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-14 02:34:51.675733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:34:51.675744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:34:51.675753 | orchestrator | skipping: [testbed-node-2] 
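For readability: each "(item={'key': ..., 'value': {...}})" entry in the loadbalancer tasks above is one element of a per-service map that the role loops over, and the same four services (haproxy, proxysql, keepalived, haproxy-ssh) recur in every loop. The YAML below is a hand-reconstructed, abbreviated sketch of that map, put together only from the item dicts printed in this log; the variable name loadbalancer_services and the exact layout are assumptions for illustration, not copied from the role's source.

# Sketch of the service map implied by the logged items (values taken from the log, abbreviated).
loadbalancer_services:
  haproxy:
    container_name: haproxy
    group: loadbalancer
    enabled: true
    image: registry.osism.tech/kolla/release/haproxy:2.4.24.20241206
    privileged: true
    volumes:
      - /etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - haproxy_socket:/var/lib/kolla/haproxy/
      - letsencrypt_certificates:/etc/haproxy/certificates
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]  # per-node API address
      timeout: "30"
  proxysql:
    container_name: proxysql
    group: loadbalancer
    enabled: true
    image: registry.osism.tech/kolla/release/proxysql:2.6.6.20241206
    privileged: false
    volumes:
      - /etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro
      - kolla_logs:/var/log/kolla/
      - proxysql:/var/lib/proxysql/
      - proxysql_socket:/var/lib/kolla/proxysql/
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_listen proxysql 6032"]
  keepalived:
    container_name: keepalived
    group: loadbalancer
    enabled: true
    image: registry.osism.tech/kolla/release/keepalived:2.2.4.20241206
    privileged: true
    volumes:
      - /etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro
      - /lib/modules:/lib/modules:ro
      - haproxy_socket:/var/lib/kolla/haproxy/
      - proxysql_socket:/var/lib/kolla/proxysql/
    dimensions: {}
  haproxy-ssh:
    container_name: haproxy_ssh
    group: loadbalancer
    enabled: false   # disabled in this deployment, hence the "skipping" results above
    image: registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_listen sshd 2985"]

Read against that map, the task results above follow a simple pattern: enabled services get their config directories, checks, config.json files and certificates templated onto each of testbed-node-0/1/2 (reported as "changed"), while services with enabled: false (haproxy-ssh here) and tasks whose conditions do not apply (such as the backend internal TLS certificate/key copies just above) are reported as "skipping".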
2025-05-14 02:34:51.675763 | orchestrator | 2025-05-14 02:34:51.675773 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-14 02:34:51.675782 | orchestrator | Wednesday 14 May 2025 02:28:28 +0000 (0:00:00.805) 0:01:04.761 ********* 2025-05-14 02:34:51.675792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-14 02:34:51.675807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:34:51.675822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:34:51.675839 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.675850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-14 02:34:51.675860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:34:51.675870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:34:51.675880 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.675890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-14 02:34:51.675909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-14 02:34:51.675919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-14 02:34:51.675936 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.675945 | orchestrator | 2025-05-14 02:34:51.675955 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-14 02:34:51.675970 | orchestrator | Wednesday 14 May 2025 02:28:29 +0000 (0:00:01.306) 0:01:06.068 ********* 2025-05-14 02:34:51.675980 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-14 02:34:51.675990 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-14 02:34:51.676000 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-14 02:34:51.676009 | orchestrator | 2025-05-14 02:34:51.676019 | orchestrator | TASK 
[loadbalancer : Copying over proxysql start script] *********************** 2025-05-14 02:34:51.676028 | orchestrator | Wednesday 14 May 2025 02:28:31 +0000 (0:00:02.101) 0:01:08.170 ********* 2025-05-14 02:34:51.676038 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-14 02:34:51.676047 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-14 02:34:51.676057 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-14 02:34:51.676066 | orchestrator | 2025-05-14 02:34:51.676076 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-14 02:34:51.676085 | orchestrator | Wednesday 14 May 2025 02:28:34 +0000 (0:00:02.405) 0:01:10.576 ********* 2025-05-14 02:34:51.676095 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 02:34:51.676105 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 02:34:51.676114 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 02:34:51.676124 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 02:34:51.676133 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.676143 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 02:34:51.676152 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.676162 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 02:34:51.676171 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.676181 | orchestrator | 2025-05-14 02:34:51.676190 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-14 02:34:51.676200 | orchestrator | Wednesday 14 May 2025 02:28:37 +0000 (0:00:03.006) 0:01:13.582 ********* 2025-05-14 02:34:51.676210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.676225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.676246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-14 02:34:51.676262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.676273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.676283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-14 02:34:51.676293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:34:51.676303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 
'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.676319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:34:51.676335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.676346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-14 02:34:51.676356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0', '__omit_place_holder__d526c95eb522090bb392eea8147c6968711248d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-14 02:34:51.676366 | orchestrator | 2025-05-14 02:34:51.676406 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-14 02:34:51.676417 | orchestrator | Wednesday 14 May 2025 02:28:40 +0000 (0:00:02.842) 0:01:16.424 ********* 2025-05-14 02:34:51.676427 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 
02:34:51.676437 | orchestrator | 2025-05-14 02:34:51.676446 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-14 02:34:51.676456 | orchestrator | Wednesday 14 May 2025 02:28:40 +0000 (0:00:00.688) 0:01:17.112 ********* 2025-05-14 02:34:51.676467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-14 02:34:51.676489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.676500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.676518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.676529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-14 02:34:51.676539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-14 02:34:51.676550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.676570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.676580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.676596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.676606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.676617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.676654 | orchestrator | 2025-05-14 02:34:51.676664 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-14 02:34:51.676674 | orchestrator | Wednesday 14 May 2025 02:28:44 +0000 (0:00:03.399) 0:01:20.512 ********* 2025-05-14 02:34:51.676684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-14 02:34:51.676701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.676716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.676733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.676743 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.676754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-14 02:34:51.676764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.676774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.676790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.676800 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.676815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-14 02:34:51.676832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.676842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.676852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.676862 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.676871 | orchestrator | 2025-05-14 02:34:51.676881 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-14 02:34:51.676891 | orchestrator | Wednesday 14 May 2025 02:28:44 +0000 (0:00:00.768) 0:01:21.280 ********* 2025-05-14 
02:34:51.676907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-14 02:34:51.676919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-14 02:34:51.676929 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.676939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-14 02:34:51.676948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-14 02:34:51.676958 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.676968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-14 02:34:51.676977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-14 02:34:51.676987 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.676997 | orchestrator | 2025-05-14 02:34:51.677007 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-14 02:34:51.677017 | orchestrator | Wednesday 14 May 2025 02:28:46 +0000 (0:00:01.106) 0:01:22.387 ********* 2025-05-14 02:34:51.677027 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.677036 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.677055 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.677064 | orchestrator | 2025-05-14 02:34:51.677074 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-14 02:34:51.677084 | orchestrator | Wednesday 14 May 2025 02:28:47 +0000 (0:00:01.269) 0:01:23.657 ********* 2025-05-14 02:34:51.677093 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.677103 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.677112 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.677122 | orchestrator | 2025-05-14 02:34:51.677132 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-14 02:34:51.677141 | orchestrator | Wednesday 14 May 2025 02:28:49 +0000 (0:00:01.883) 0:01:25.541 ********* 2025-05-14 02:34:51.677151 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.677160 | orchestrator | 2025-05-14 02:34:51.677170 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-14 02:34:51.677179 | orchestrator | Wednesday 14 May 2025 02:28:50 +0000 (0:00:01.257) 0:01:26.798 ********* 2025-05-14 02:34:51.677198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.677216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.677227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.677237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.677252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.677268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.677279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.677296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.677306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.677316 | orchestrator | 2025-05-14 02:34:51.677325 | orchestrator | TASK 
[haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-14 02:34:51.677335 | orchestrator | Wednesday 14 May 2025 02:28:56 +0000 (0:00:05.672) 0:01:32.470 ********* 2025-05-14 02:34:51.677349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.677366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.677382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.677392 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.677403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.677413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.677429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.677439 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.678385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.678460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.678472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.678482 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.678492 | orchestrator | 2025-05-14 02:34:51.678502 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-14 02:34:51.678512 | orchestrator | Wednesday 14 May 2025 02:28:56 +0000 (0:00:00.730) 0:01:33.201 ********* 2025-05-14 02:34:51.678522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 02:34:51.678532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 02:34:51.678543 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.678553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 02:34:51.678562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 02:34:51.678572 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.678582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 02:34:51.678597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-14 02:34:51.678607 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.678617 | orchestrator | 2025-05-14 02:34:51.678659 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-14 02:34:51.678713 | orchestrator | Wednesday 14 May 2025 02:28:58 +0000 (0:00:01.222) 0:01:34.424 ********* 2025-05-14 02:34:51.678731 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.678742 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.678752 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.678761 | orchestrator | 2025-05-14 02:34:51.678771 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-14 02:34:51.678789 | orchestrator | Wednesday 14 May 2025 02:28:59 +0000 (0:00:01.431) 0:01:35.855 ********* 2025-05-14 02:34:51.678799 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.678808 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.678818 
| orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.678827 | orchestrator | 2025-05-14 02:34:51.678836 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-14 02:34:51.678846 | orchestrator | Wednesday 14 May 2025 02:29:02 +0000 (0:00:02.551) 0:01:38.406 ********* 2025-05-14 02:34:51.678856 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.678867 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.678878 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.678890 | orchestrator | 2025-05-14 02:34:51.678910 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-14 02:34:51.678922 | orchestrator | Wednesday 14 May 2025 02:29:02 +0000 (0:00:00.289) 0:01:38.696 ********* 2025-05-14 02:34:51.678934 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.678944 | orchestrator | 2025-05-14 02:34:51.678955 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-14 02:34:51.678966 | orchestrator | Wednesday 14 May 2025 02:29:03 +0000 (0:00:00.972) 0:01:39.669 ********* 2025-05-14 02:34:51.679014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-14 02:34:51.679028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-14 02:34:51.679039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-14 02:34:51.679050 | orchestrator | 2025-05-14 02:34:51.679059 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-14 02:34:51.679075 | orchestrator | Wednesday 14 May 2025 02:29:06 +0000 (0:00:02.950) 0:01:42.620 ********* 2025-05-14 02:34:51.679091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-14 02:34:51.679101 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.679118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-14 02:34:51.679128 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.679138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-14 02:34:51.679148 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.679158 | orchestrator | 2025-05-14 02:34:51.679167 | orchestrator | TASK [haproxy-config : Configuring firewall for 
ceph-rgw] ********************** 2025-05-14 02:34:51.679177 | orchestrator | Wednesday 14 May 2025 02:29:07 +0000 (0:00:01.706) 0:01:44.326 ********* 2025-05-14 02:34:51.679200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 02:34:51.679212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 02:34:51.679231 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.679241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 02:34:51.679267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 02:34:51.679278 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.679288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 02:34:51.679313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-14 02:34:51.679324 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.679334 | orchestrator | 2025-05-14 02:34:51.679343 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-14 02:34:51.679353 | orchestrator | Wednesday 14 May 2025 02:29:10 +0000 (0:00:02.509) 0:01:46.835 ********* 2025-05-14 02:34:51.679362 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.679372 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.679381 | 
orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.679391 | orchestrator | 2025-05-14 02:34:51.679400 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-14 02:34:51.679410 | orchestrator | Wednesday 14 May 2025 02:29:11 +0000 (0:00:00.808) 0:01:47.644 ********* 2025-05-14 02:34:51.679419 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.679472 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.679483 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.679492 | orchestrator | 2025-05-14 02:34:51.679502 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-14 02:34:51.679511 | orchestrator | Wednesday 14 May 2025 02:29:12 +0000 (0:00:01.330) 0:01:48.974 ********* 2025-05-14 02:34:51.679521 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.679530 | orchestrator | 2025-05-14 02:34:51.679539 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-14 02:34:51.679549 | orchestrator | Wednesday 14 May 2025 02:29:13 +0000 (0:00:00.984) 0:01:49.959 ********* 2025-05-14 02:34:51.679559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.679589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.679673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.679738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679774 | orchestrator | 2025-05-14 02:34:51.679784 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-14 02:34:51.679794 | orchestrator | Wednesday 14 May 2025 02:29:17 +0000 (0:00:03.785) 0:01:53.745 ********* 2025-05-14 02:34:51.679808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.679819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679861 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.679871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.679881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679922 | orchestrator | 
skipping: [testbed-node-1] 2025-05-14 02:34:51.679932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.679949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.679984 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.679994 | orchestrator | 2025-05-14 02:34:51.680003 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-14 02:34:51.680013 | orchestrator | Wednesday 14 May 
2025 02:29:18 +0000 (0:00:00.848) 0:01:54.594 ********* 2025-05-14 02:34:51.680023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 02:34:51.680038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 02:34:51.680049 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.680058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 02:34:51.680068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 02:34:51.680084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 02:34:51.680094 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.680104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-14 02:34:51.680113 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.680123 | orchestrator | 2025-05-14 02:34:51.680132 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-14 02:34:51.680142 | orchestrator | Wednesday 14 May 2025 02:29:19 +0000 (0:00:01.005) 0:01:55.599 ********* 2025-05-14 02:34:51.680151 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.680161 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.680171 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.680181 | orchestrator | 2025-05-14 02:34:51.680191 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-14 02:34:51.680200 | orchestrator | Wednesday 14 May 2025 02:29:20 +0000 (0:00:01.641) 0:01:57.240 ********* 2025-05-14 02:34:51.680226 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.680246 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.680256 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.680265 | orchestrator | 2025-05-14 02:34:51.680275 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-14 02:34:51.680284 | orchestrator | Wednesday 14 May 2025 02:29:23 +0000 (0:00:02.167) 0:01:59.408 ********* 2025-05-14 02:34:51.680294 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.680303 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.680313 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.680322 | orchestrator | 2025-05-14 02:34:51.680331 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-14 02:34:51.680341 | orchestrator | Wednesday 14 May 2025 02:29:23 +0000 (0:00:00.282) 0:01:59.691 
********* 2025-05-14 02:34:51.680350 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.680360 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.680369 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.680378 | orchestrator | 2025-05-14 02:34:51.680388 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-14 02:34:51.680397 | orchestrator | Wednesday 14 May 2025 02:29:23 +0000 (0:00:00.496) 0:02:00.187 ********* 2025-05-14 02:34:51.680407 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.680416 | orchestrator | 2025-05-14 02:34:51.680426 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-14 02:34:51.680435 | orchestrator | Wednesday 14 May 2025 02:29:24 +0000 (0:00:01.075) 0:02:01.263 ********* 2025-05-14 02:34:51.680450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:34:51.680466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:34:51.680483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:34:51.680560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:34:51.680571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:34:51.680581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:34:51.680601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680774 | orchestrator | 2025-05-14 02:34:51.680789 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-14 02:34:51.680800 | orchestrator | Wednesday 14 May 2025 02:29:31 +0000 (0:00:06.125) 0:02:07.389 ********* 2025-05-14 02:34:51.680810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:34:51.680820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:34:51.680830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680897 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.680907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:34:51.680917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:34:51.680927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.680993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:34:51.681003 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.681013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:34:51.681033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.681044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.681059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.681070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.681080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.681090 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.681100 | orchestrator | 2025-05-14 02:34:51.681109 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-14 02:34:51.681119 | orchestrator | Wednesday 14 May 2025 02:29:32 +0000 (0:00:01.118) 0:02:08.507 ********* 2025-05-14 02:34:51.681129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-14 02:34:51.681138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-14 02:34:51.681155 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.681165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-14 02:34:51.681175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-14 02:34:51.681185 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.681195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-14 02:34:51.681209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-14 02:34:51.681219 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.681228 | orchestrator | 2025-05-14 02:34:51.681238 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-14 02:34:51.681248 | orchestrator | Wednesday 14 May 2025 02:29:33 +0000 (0:00:01.364) 0:02:09.871 ********* 2025-05-14 02:34:51.681258 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.681267 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.681276 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.681286 | orchestrator | 2025-05-14 02:34:51.681295 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-14 02:34:51.681305 | orchestrator | Wednesday 14 May 2025 02:29:34 +0000 (0:00:01.169) 0:02:11.041 ********* 2025-05-14 02:34:51.681314 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.681324 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.681333 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.681343 | orchestrator | 2025-05-14 02:34:51.681352 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-14 02:34:51.681362 | orchestrator | Wednesday 14 May 2025 02:29:37 +0000 (0:00:02.359) 0:02:13.400 ********* 2025-05-14 02:34:51.681371 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.681381 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.681390 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.681400 | orchestrator | 2025-05-14 02:34:51.681409 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-14 02:34:51.681908 | orchestrator | Wednesday 14 May 2025 02:29:37 +0000 (0:00:00.451) 0:02:13.852 ********* 2025-05-14 02:34:51.681931 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.681941 | orchestrator | 2025-05-14 02:34:51.681951 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-14 02:34:51.681960 | orchestrator | Wednesday 14 May 2025 02:29:38 +0000 (0:00:01.054) 0:02:14.907 ********* 2025-05-14 02:34:51.681972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:34:51.682000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:34:51.682069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:34:51.682096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:34:51.682115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:34:51.682127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:34:51.682144 | orchestrator | 2025-05-14 02:34:51.682154 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-14 02:34:51.682168 | orchestrator | Wednesday 14 May 2025 02:29:43 +0000 (0:00:05.404) 0:02:20.311 ********* 2025-05-14 02:34:51.682260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:34:51.682325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:34:51.682368 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.682392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:34:51.682404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:34:51.682421 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.682437 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:34:51.682455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:34:51.682473 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.682483 | orchestrator | 2025-05-14 02:34:51.682493 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-14 02:34:51.682502 | orchestrator | Wednesday 14 May 2025 02:29:49 +0000 (0:00:05.888) 0:02:26.200 ********* 2025-05-14 02:34:51.682512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 02:34:51.682523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 02:34:51.682534 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.682548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 02:34:51.682563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 02:34:51.682574 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.682584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 02:34:51.682600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-14 02:34:51.682610 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.682676 | orchestrator | 2025-05-14 02:34:51.682694 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-14 02:34:51.682710 | orchestrator | Wednesday 14 May 2025 02:29:55 +0000 (0:00:05.915) 0:02:32.116 ********* 2025-05-14 02:34:51.682771 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.682782 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.682791 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.682801 | orchestrator | 2025-05-14 02:34:51.682811 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-14 02:34:51.682820 | orchestrator | Wednesday 14 May 2025 02:29:56 +0000 (0:00:01.214) 0:02:33.330 ********* 2025-05-14 02:34:51.682829 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.682839 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.682848 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.682858 | orchestrator | 2025-05-14 02:34:51.682867 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-14 02:34:51.682877 | orchestrator | Wednesday 14 May 2025 02:29:59 +0000 (0:00:02.166) 0:02:35.497 ********* 2025-05-14 02:34:51.682886 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.682896 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.682905 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.682914 | orchestrator | 2025-05-14 02:34:51.682924 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-14 02:34:51.682933 | orchestrator | Wednesday 14 May 2025 02:29:59 +0000 (0:00:00.497) 0:02:35.995 ********* 2025-05-14 02:34:51.682943 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.682952 | orchestrator | 2025-05-14 02:34:51.682961 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-14 02:34:51.682971 | orchestrator | Wednesday 14 May 2025 02:30:00 +0000 (0:00:01.299) 0:02:37.294 ********* 2025-05-14 02:34:51.682981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:34:51.682998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:34:51.683023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:34:51.683034 | orchestrator | 2025-05-14 02:34:51.683043 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-14 02:34:51.683053 | orchestrator | Wednesday 14 May 2025 02:30:05 +0000 (0:00:04.989) 0:02:42.284 ********* 2025-05-14 02:34:51.683063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:34:51.683073 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.683083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:34:51.683094 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.683103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:34:51.683113 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.683123 | orchestrator | 2025-05-14 02:34:51.683132 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-14 02:34:51.683141 | orchestrator | Wednesday 14 May 2025 02:30:06 +0000 (0:00:00.604) 0:02:42.889 ********* 2025-05-14 02:34:51.683151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-14 02:34:51.683165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-14 02:34:51.683176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-14 02:34:51.683196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-14 02:34:51.683205 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.683215 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.683225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-14 02:34:51.683239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-14 02:34:51.683249 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.683259 | orchestrator | 2025-05-14 02:34:51.683268 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-14 02:34:51.683278 | orchestrator | Wednesday 14 May 2025 02:30:07 +0000 (0:00:01.103) 0:02:43.992 ********* 2025-05-14 02:34:51.683287 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.683297 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.683307 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.683316 | orchestrator | 2025-05-14 02:34:51.683326 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-14 02:34:51.683335 | orchestrator | Wednesday 14 May 2025 02:30:08 +0000 (0:00:01.201) 0:02:45.193 ********* 2025-05-14 02:34:51.683345 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.683354 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.683363 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.683373 | orchestrator | 2025-05-14 02:34:51.683382 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-14 02:34:51.683392 | orchestrator | Wednesday 14 May 2025 02:30:11 +0000 (0:00:02.412) 0:02:47.605 ********* 2025-05-14 02:34:51.683401 | orchestrator | included: heat 
for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.683411 | orchestrator | 2025-05-14 02:34:51.683420 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-14 02:34:51.683430 | orchestrator | Wednesday 14 May 2025 02:30:12 +0000 (0:00:01.371) 0:02:48.977 ********* 2025-05-14 02:34:51.683440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.683450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.683471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.683488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.683499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.683509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.683520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.683551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 
'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.683562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.683572 | orchestrator | 2025-05-14 02:34:51.683587 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-14 02:34:51.683597 | orchestrator | Wednesday 14 May 2025 02:30:19 +0000 (0:00:07.067) 0:02:56.045 ********* 2025-05-14 02:34:51.683607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.683618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.683654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.683674 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.683689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.683705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.683716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.683726 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.683736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.683747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.683767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.683778 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.683787 | orchestrator | 2025-05-14 02:34:51.683797 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-14 02:34:51.683807 | orchestrator | Wednesday 14 May 2025 02:30:20 +0000 (0:00:00.904) 0:02:56.949 ********* 2025-05-14 02:34:51.683817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 02:34:51.683827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 02:34:51.683837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 02:34:51.683853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 02:34:51.683863 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.683873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 02:34:51.683882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 02:34:51.683892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 
02:34:51.683902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 02:34:51.683912 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.683921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 02:34:51.683931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-14 02:34:51.683946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 02:34:51.683956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-14 02:34:51.683966 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.683976 | orchestrator | 2025-05-14 02:34:51.683985 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-14 02:34:51.683995 | orchestrator | Wednesday 14 May 2025 02:30:22 +0000 (0:00:01.489) 0:02:58.439 ********* 2025-05-14 02:34:51.684004 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.684014 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.684024 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.684033 | orchestrator | 2025-05-14 02:34:51.684043 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-05-14 02:34:51.684052 | orchestrator | Wednesday 14 May 2025 02:30:23 +0000 (0:00:01.364) 0:02:59.804 ********* 2025-05-14 02:34:51.684062 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.684071 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.684080 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.684090 | orchestrator | 2025-05-14 02:34:51.684099 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-14 02:34:51.684109 | orchestrator | Wednesday 14 May 2025 02:30:26 +0000 (0:00:02.592) 0:03:02.397 ********* 2025-05-14 02:34:51.684118 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.684128 | orchestrator | 2025-05-14 02:34:51.684137 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-14 02:34:51.684146 | orchestrator | Wednesday 14 May 2025 02:30:27 +0000 (0:00:01.154) 0:03:03.551 ********* 2025-05-14 02:34:51.684169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:34:51.684192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:34:51.684213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:34:51.684230 | orchestrator | 2025-05-14 02:34:51.684240 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-14 02:34:51.684249 | orchestrator | Wednesday 14 May 2025 02:30:31 +0000 (0:00:04.591) 0:03:08.142 ********* 2025-05-14 02:34:51.684266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:34:51.684277 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.684601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:34:51.684690 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.684704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:34:51.684714 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.684724 | orchestrator | 2025-05-14 02:34:51.684732 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-14 02:34:51.684740 | orchestrator | Wednesday 14 May 2025 02:30:32 +0000 (0:00:01.034) 0:03:09.177 ********* 2025-05-14 02:34:51.684748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 02:34:51.684769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 02:34:51.684780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 02:34:51.684788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 02:34:51.684797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-14 02:34:51.684805 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.684836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 02:34:51.684845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 02:34:51.684853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 02:34:51.684861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 02:34:51.684873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 02:34:51.684882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-14 02:34:51.684890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 02:34:51.684898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-14 02:34:51.684911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-14 02:34:51.684919 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.684927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-14 02:34:51.684935 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.684994 | orchestrator | 2025-05-14 02:34:51.685015 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-14 02:34:51.685024 | orchestrator | Wednesday 14 May 2025 02:30:34 +0000 (0:00:01.277) 0:03:10.455 ********* 2025-05-14 02:34:51.685032 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.685050 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.685059 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.685067 | orchestrator | 2025-05-14 02:34:51.685075 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-14 02:34:51.685083 | orchestrator | Wednesday 14 May 2025 02:30:35 +0000 (0:00:01.300) 0:03:11.756 ********* 2025-05-14 02:34:51.685090 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.685098 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.685126 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.685134 | orchestrator | 2025-05-14 02:34:51.685142 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-14 02:34:51.685158 | orchestrator | Wednesday 14 May 2025 02:30:37 +0000 (0:00:02.046) 0:03:13.803 ********* 2025-05-14 02:34:51.685166 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.685174 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.685182 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.685189 | orchestrator | 2025-05-14 02:34:51.685216 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-14 02:34:51.685225 | orchestrator | Wednesday 14 May 2025 02:30:37 +0000 (0:00:00.419) 0:03:14.222 ********* 2025-05-14 02:34:51.685234 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.685242 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.685251 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.685260 | orchestrator | 2025-05-14 02:34:51.685269 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-14 02:34:51.685278 | orchestrator | Wednesday 14 May 2025 02:30:38 +0000 (0:00:00.239) 0:03:14.462 ********* 2025-05-14 02:34:51.685287 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.685297 | orchestrator | 
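The horizon entries logged above describe four HAProxy listeners: an internal HTTPS frontend on port 443 balancing round-robin to the horizon containers on listen port 80, an internal port-80 redirect, and the same pair again for the external FQDN api.testbed.osism.xyz, each with a carve-out that routes /.well-known/acme-challenge/ requests to the acme_client backend. Below is a minimal sketch of how such a redirect entry could be rendered into an HAProxy frontend; the helper name, the output layout and the VIP address used in the example call are assumptions for illustration, not the kolla-ansible haproxy-config template.

# A minimal sketch: rendering the 'horizon_external_redirect' entry above into
# an HAProxy frontend. Helper name, layout and the example bind address are
# assumptions; this is not the kolla-ansible haproxy-config template.

horizon_external_redirect = {
    "enabled": True,
    "mode": "redirect",
    "external": True,
    "external_fqdn": "api.testbed.osism.xyz",
    "port": "80",
    "listen_port": "80",
    "frontend_redirect_extra": [
        "use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }"
    ],
}


def render_redirect_frontend(name, cfg, bind_address):
    """Render an HTTP frontend that answers ACME HTTP-01 challenges on port 80
    and redirects everything else to HTTPS."""
    lines = [
        f"frontend {name}_front",
        "    mode http",
        f"    bind {bind_address}:{cfg['listen_port']}",
    ]
    # Keep the extra rules (the ACME carve-out) ahead of the redirect so
    # certificate renewal requests are never bounced to HTTPS.
    lines += [f"    {rule}" for rule in cfg.get("frontend_redirect_extra", [])]
    lines.append("    redirect scheme https code 301")
    return "\n".join(lines)


# "203.0.113.10" stands in for the external VIP, which is not shown in the log.
print(render_redirect_frontend("horizon_external_redirect",
                               horizon_external_redirect, "203.0.113.10"))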
2025-05-14 02:34:51.685306 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-14 02:34:51.685315 | orchestrator | Wednesday 14 May 2025 02:30:39 +0000 (0:00:01.263) 0:03:15.726 ********* 2025-05-14 02:34:51.685334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:34:51.685351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:34:51.685367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:34:51.685377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:34:51.685387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:34:51.685396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:34:51.685410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:34:51.685438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:34:51.685453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:34:51.685463 | orchestrator | 2025-05-14 02:34:51.685471 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-14 02:34:51.685481 | orchestrator | Wednesday 14 May 2025 02:30:43 +0000 (0:00:03.635) 0:03:19.361 ********* 2025-05-14 02:34:51.685491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:34:51.685501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:34:51.685520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:34:51.685529 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.685540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:34:51.685554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:34:51.685564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:34:51.685572 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.685580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:34:51.685598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:34:51.685607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:34:51.685615 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.685674 | orchestrator | 2025-05-14 02:34:51.685683 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-14 02:34:51.685691 | orchestrator | Wednesday 14 May 2025 02:30:43 +0000 (0:00:00.639) 0:03:20.000 ********* 2025-05-14 02:34:51.685699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 02:34:51.685708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 02:34:51.685716 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.685729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 02:34:51.685738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 02:34:51.685746 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.685754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 02:34:51.685762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-14 02:34:51.685770 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.685778 | orchestrator | 2025-05-14 02:34:51.685786 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-14 02:34:51.685799 | orchestrator | Wednesday 14 May 2025 02:30:44 +0000 (0:00:01.019) 0:03:21.020 ********* 2025-05-14 02:34:51.685813 | orchestrator | changed: 
[testbed-node-0] 2025-05-14 02:34:51.685835 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.685848 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.685861 | orchestrator | 2025-05-14 02:34:51.685874 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-14 02:34:51.685886 | orchestrator | Wednesday 14 May 2025 02:30:46 +0000 (0:00:01.408) 0:03:22.428 ********* 2025-05-14 02:34:51.685900 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.685912 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.685926 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.685936 | orchestrator | 2025-05-14 02:34:51.685944 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-14 02:34:51.685952 | orchestrator | Wednesday 14 May 2025 02:30:48 +0000 (0:00:02.249) 0:03:24.678 ********* 2025-05-14 02:34:51.685959 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.685967 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.685975 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.685983 | orchestrator | 2025-05-14 02:34:51.685991 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-14 02:34:51.685999 | orchestrator | Wednesday 14 May 2025 02:30:48 +0000 (0:00:00.296) 0:03:24.975 ********* 2025-05-14 02:34:51.686006 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.686014 | orchestrator | 2025-05-14 02:34:51.686065 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-14 02:34:51.686073 | orchestrator | Wednesday 14 May 2025 02:30:49 +0000 (0:00:01.311) 0:03:26.286 ********* 2025-05-14 02:34:51.686086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:34:51.686096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})  2025-05-14 02:34:51.686112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:34:51.686128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:34:51.686149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686158 | orchestrator | 2025-05-14 02:34:51.686166 | orchestrator | 
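Keystone, magnum and (further down) manila follow a simpler common pattern than horizon: one internal and one external listener on the native API port (5000, 9511 and 8786 respectively), with each controller's healthcheck_curl test pointing at that node's internal API address (192.168.16.10 through .12). The sketch below shows the backend member lines such a listener would need, assuming those three addresses; the helper, its output format and the bare "check" option are illustrative, not the actual haproxy-config template output.

# A minimal sketch, assuming the three controller addresses taken from the
# healthcheck_curl tests logged above; the helper and its output format are
# illustrative, not the kolla-ansible haproxy-config template.

CONTROLLERS = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}


def render_backend(service, port, balance="roundrobin"):
    """Build an HAProxy backend with one member per controller."""
    lines = [f"backend {service}_back", "    mode http", f"    balance {balance}"]
    lines += [
        f"    server {host} {addr}:{port} check"
        for host, addr in CONTROLLERS.items()
    ]
    return "\n".join(lines)


# Ports taken from the log: keystone 5000, magnum 9511 (manila uses 8786).
print(render_backend("keystone_internal", "5000"))
print(render_backend("magnum_api", "9511"))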
TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-14 02:34:51.686173 | orchestrator | Wednesday 14 May 2025 02:30:54 +0000 (0:00:04.438) 0:03:30.725 ********* 2025-05-14 02:34:51.686187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:34:51.686196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686209 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.686216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:34:51.686227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686234 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.686241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:34:51.686252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686264 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.686271 | orchestrator | 2025-05-14 02:34:51.686277 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-14 02:34:51.686284 | orchestrator | Wednesday 14 May 2025 02:30:55 +0000 (0:00:00.758) 0:03:31.483 ********* 2025-05-14 02:34:51.686291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-14 02:34:51.686298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-14 02:34:51.686305 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.686312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-14 02:34:51.686319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-14 02:34:51.686325 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.686332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-14 02:34:51.686338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-14 02:34:51.686345 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.686352 | orchestrator | 2025-05-14 02:34:51.686359 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-14 02:34:51.686365 | orchestrator | Wednesday 14 May 2025 02:30:56 +0000 (0:00:01.107) 0:03:32.590 ********* 2025-05-14 02:34:51.686372 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.686378 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.686385 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.686391 | orchestrator | 2025-05-14 02:34:51.686398 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-14 02:34:51.686404 | orchestrator | Wednesday 14 May 2025 02:30:57 +0000 (0:00:01.232) 0:03:33.823 ********* 2025-05-14 02:34:51.686411 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.686417 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.686424 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.686430 | orchestrator | 2025-05-14 02:34:51.686437 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-14 02:34:51.686444 | orchestrator | Wednesday 14 May 2025 02:30:59 +0000 (0:00:02.123) 0:03:35.947 ********* 2025-05-14 02:34:51.686456 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.686463 | orchestrator | 2025-05-14 02:34:51.686469 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-14 02:34:51.686476 | orchestrator | Wednesday 14 May 2025 02:31:00 +0000 (0:00:01.078) 0:03:37.025 ********* 2025-05-14 02:34:51.686483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-14 02:34:51.686495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-14 02:34:51.686515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-14 02:34:51.686569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686594 | orchestrator | 2025-05-14 02:34:51.686601 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-14 02:34:51.686608 | orchestrator | Wednesday 14 May 2025 02:31:05 +0000 (0:00:04.412) 0:03:41.437 ********* 2025-05-14 02:34:51.686640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-14 02:34:51.686660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686682 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.686689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-14 02:34:51.686700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686731 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.686739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-14 02:34:51.686746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.686776 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.686783 | orchestrator | 2025-05-14 02:34:51.686790 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-14 02:34:51.686796 | orchestrator | Wednesday 14 May 2025 02:31:06 +0000 (0:00:01.155) 0:03:42.592 ********* 2025-05-14 02:34:51.686803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-14 02:34:51.686810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-14 02:34:51.686816 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.686823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-14 02:34:51.686830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-14 02:34:51.686837 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.686847 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-14 02:34:51.686854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-14 02:34:51.686860 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.686867 | orchestrator | 2025-05-14 02:34:51.686874 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-14 02:34:51.686880 | orchestrator | Wednesday 14 May 2025 02:31:07 +0000 (0:00:01.420) 0:03:44.012 ********* 2025-05-14 02:34:51.686887 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.686894 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.686900 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.686907 | orchestrator | 2025-05-14 02:34:51.686914 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-14 02:34:51.686920 | orchestrator | Wednesday 14 May 2025 02:31:09 +0000 (0:00:01.560) 0:03:45.573 ********* 2025-05-14 02:34:51.686927 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.686933 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.686940 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.686946 | orchestrator | 2025-05-14 02:34:51.686952 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-14 02:34:51.686959 | orchestrator | Wednesday 14 May 2025 02:31:11 +0000 (0:00:02.416) 0:03:47.990 ********* 2025-05-14 02:34:51.686965 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.686972 | orchestrator | 2025-05-14 02:34:51.686978 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-14 02:34:51.686985 | orchestrator | Wednesday 14 May 2025 02:31:13 +0000 (0:00:01.453) 0:03:49.443 ********* 2025-05-14 02:34:51.686992 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:34:51.686998 | orchestrator | 2025-05-14 02:34:51.687005 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-14 02:34:51.687019 | orchestrator | Wednesday 14 May 2025 02:31:16 +0000 (0:00:03.203) 0:03:52.647 ********* 2025-05-14 02:34:51.687030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 02:34:51.687038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 02:34:51.687045 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.687058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 02:34:51.687071 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 02:34:51.687082 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.687094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 02:34:51.687102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 02:34:51.687109 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.687116 | orchestrator | 2025-05-14 02:34:51.687122 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-14 02:34:51.687129 | orchestrator | Wednesday 14 May 2025 02:31:19 
+0000 (0:00:03.204) 0:03:55.851 ********* 2025-05-14 02:34:51.687147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 02:34:51.687155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 02:34:51.687162 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.687174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 02:34:51.687187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 02:34:51.687195 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.687206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 
inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-14 02:34:51.687219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-14 02:34:51.687227 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.687233 | orchestrator | 2025-05-14 02:34:51.687240 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-14 02:34:51.687247 | orchestrator | Wednesday 14 May 2025 02:31:22 +0000 (0:00:02.895) 0:03:58.746 ********* 2025-05-14 02:34:51.687253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 02:34:51.687269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 02:34:51.687276 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.687283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 02:34:51.687293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 02:34:51.687300 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.687308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 02:34:51.687320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-14 02:34:51.687327 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.687362 | orchestrator | 2025-05-14 02:34:51.687370 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-14 02:34:51.687377 | orchestrator | Wednesday 14 May 2025 02:31:25 +0000 (0:00:03.481) 0:04:02.227 ********* 2025-05-14 02:34:51.687390 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.687396 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.687403 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.687475 | orchestrator | 2025-05-14 02:34:51.687483 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-14 02:34:51.687490 | orchestrator | Wednesday 14 May 2025 02:31:28 +0000 (0:00:02.168) 0:04:04.396 ********* 2025-05-14 02:34:51.687496 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.687503 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.687509 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.687516 | orchestrator | 2025-05-14 02:34:51.687543 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-14 02:34:51.687552 | orchestrator | Wednesday 14 May 2025 02:31:29 +0000 (0:00:01.711) 0:04:06.107 ********* 2025-05-14 02:34:51.687558 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.687565 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.687571 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.687577 | orchestrator | 2025-05-14 02:34:51.687584 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-14 02:34:51.687591 | orchestrator | Wednesday 14 May 2025 02:31:30 +0000 (0:00:00.516) 0:04:06.624 ********* 2025-05-14 02:34:51.687597 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.687603 | orchestrator | 2025-05-14 02:34:51.687610 | orchestrator | TASK [haproxy-config : Copying over memcached 
haproxy config] ****************** 2025-05-14 02:34:51.687617 | orchestrator | Wednesday 14 May 2025 02:31:31 +0000 (0:00:01.479) 0:04:08.103 ********* 2025-05-14 02:34:51.687639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-14 02:34:51.687654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-14 02:34:51.687661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-14 02:34:51.687674 | orchestrator | 2025-05-14 02:34:51.687681 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-14 02:34:51.687688 | orchestrator | Wednesday 14 May 2025 02:31:33 +0000 (0:00:01.869) 0:04:09.973 ********* 2025-05-14 02:34:51.687700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-14 02:34:51.687708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-14 02:34:51.687715 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.687722 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.687729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-14 02:34:51.687736 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.687743 | orchestrator | 2025-05-14 02:34:51.687749 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-14 02:34:51.687756 | orchestrator | Wednesday 14 May 2025 02:31:34 +0000 (0:00:00.418) 0:04:10.392 ********* 2025-05-14 02:34:51.687766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-14 02:34:51.687773 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.687780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-14 02:34:51.687787 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.687794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-14 02:34:51.687806 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.687813 | orchestrator | 2025-05-14 02:34:51.687820 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-14 
02:34:51.687826 | orchestrator | Wednesday 14 May 2025 02:31:34 +0000 (0:00:00.949) 0:04:11.341 ********* 2025-05-14 02:34:51.687833 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.687840 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.687846 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.687853 | orchestrator | 2025-05-14 02:34:51.687888 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-14 02:34:51.687919 | orchestrator | Wednesday 14 May 2025 02:31:35 +0000 (0:00:00.662) 0:04:12.004 ********* 2025-05-14 02:34:51.687927 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.687934 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.687940 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.687947 | orchestrator | 2025-05-14 02:34:51.687954 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-14 02:34:51.687960 | orchestrator | Wednesday 14 May 2025 02:31:36 +0000 (0:00:01.229) 0:04:13.234 ********* 2025-05-14 02:34:51.687967 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.687978 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.687985 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.687992 | orchestrator | 2025-05-14 02:34:51.687999 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-14 02:34:51.688005 | orchestrator | Wednesday 14 May 2025 02:31:37 +0000 (0:00:00.268) 0:04:13.502 ********* 2025-05-14 02:34:51.688012 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.688018 | orchestrator | 2025-05-14 02:34:51.688025 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-14 02:34:51.688031 | orchestrator | Wednesday 14 May 2025 02:31:38 +0000 (0:00:01.444) 0:04:14.947 ********* 2025-05-14 02:34:51.688038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:34:51.688046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:34:51.688089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.688105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.688115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:34:51.688134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 
02:34:51.688153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.688160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:34:51.688187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:34:51.688198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:34:51.688206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:34:51.688249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:34:51.688290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.688345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:34:51.688352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.688359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.688392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:34:51.688411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.688419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.688524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:34:51.688532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.688544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.688570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:34:51.688581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.688589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:34:51.688684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:34:51.688761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:34:51.688774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
2025-05-14 02:34:51.688781 | orchestrator | 2025-05-14 02:34:51.688788 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-14 02:34:51.688795 | orchestrator | Wednesday 14 May 2025 02:31:43 +0000 (0:00:05.005) 0:04:19.953 ********* 2025-05-14 02:34:51.688806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:34:51.688813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:34:51.688849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:34:51.688870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.688884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.688901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:34:51.688908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:51.688934 | orchestrator | 2025-05-14 02:34:51.688945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent',
'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:34:51.688968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:34:51.688987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.688998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.689022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.689029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689036 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.689086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.689100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:34:51.689107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:34:51.689153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:34:51.689160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.689167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:34:51.689177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.689191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689197 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.689214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.689310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.689327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:34:51.689338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:34:51.689375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.689382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:34:51.689388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:34:51.689398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689405 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.689434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:34:51.689461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:34:51.689467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.689474 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.689480 | orchestrator | 2025-05-14 02:34:51.689487 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-14 02:34:51.689493 | orchestrator | Wednesday 14 May 2025 02:31:45 +0000 (0:00:01.955) 0:04:21.909 ********* 2025-05-14 02:34:51.689499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-14 02:34:51.689512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-14 02:34:51.689518 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.689524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-14 02:34:51.689531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}})  2025-05-14 02:34:51.689537 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.689543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-14 02:34:51.689553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-14 02:34:51.689559 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.689565 | orchestrator | 2025-05-14 02:34:51.689572 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-14 02:34:51.689578 | orchestrator | Wednesday 14 May 2025 02:31:47 +0000 (0:00:01.912) 0:04:23.821 ********* 2025-05-14 02:34:51.689584 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.689590 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.689596 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.689602 | orchestrator | 2025-05-14 02:34:51.689608 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-14 02:34:51.689615 | orchestrator | Wednesday 14 May 2025 02:31:48 +0000 (0:00:01.460) 0:04:25.281 ********* 2025-05-14 02:34:51.689634 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.689641 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.689647 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.689653 | orchestrator | 2025-05-14 02:34:51.689659 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-14 02:34:51.689670 | orchestrator | Wednesday 14 May 2025 02:31:51 +0000 (0:00:02.557) 0:04:27.838 ********* 2025-05-14 02:34:51.689677 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.689683 | orchestrator | 2025-05-14 02:34:51.689689 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-14 02:34:51.689695 | orchestrator | Wednesday 14 May 2025 02:31:53 +0000 (0:00:01.593) 0:04:29.432 ********* 2025-05-14 02:34:51.689701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.689709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.689747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.689760 | orchestrator | 2025-05-14 02:34:51.689767 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-14 02:34:51.689773 | orchestrator | Wednesday 14 May 2025 02:31:56 +0000 (0:00:03.740) 0:04:33.172 ********* 2025-05-14 02:34:51.689784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.689791 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.689797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.689804 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.689810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.689817 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.689823 | orchestrator | 2025-05-14 02:34:51.689829 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-14 02:34:51.689876 | orchestrator | Wednesday 14 May 2025 02:31:57 +0000 (0:00:00.709) 0:04:33.881 ********* 2025-05-14 02:34:51.689903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 02:34:51.689914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 02:34:51.689920 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.689927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 02:34:51.689933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 02:34:51.689939 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.689946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 02:34:51.689952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-14 02:34:51.689958 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.689964 | orchestrator | 2025-05-14 02:34:51.689971 | orchestrator 
| TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-14 02:34:51.689977 | orchestrator | Wednesday 14 May 2025 02:31:58 +0000 (0:00:00.951) 0:04:34.833 ********* 2025-05-14 02:34:51.689983 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.689989 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.689995 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.690001 | orchestrator | 2025-05-14 02:34:51.690007 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-14 02:34:51.690013 | orchestrator | Wednesday 14 May 2025 02:31:59 +0000 (0:00:01.492) 0:04:36.326 ********* 2025-05-14 02:34:51.690068 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.690075 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.690086 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.690093 | orchestrator | 2025-05-14 02:34:51.690099 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-14 02:34:51.690105 | orchestrator | Wednesday 14 May 2025 02:32:02 +0000 (0:00:02.424) 0:04:38.750 ********* 2025-05-14 02:34:51.690111 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.690117 | orchestrator | 2025-05-14 02:34:51.690123 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-14 02:34:51.690129 | orchestrator | Wednesday 14 May 2025 02:32:04 +0000 (0:00:01.698) 0:04:40.449 ********* 2025-05-14 02:34:51.690136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.690150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.690160 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.690172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.690179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.690186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.690200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.690208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.690214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.690220 | orchestrator | 2025-05-14 02:34:51.690227 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-14 02:34:51.690233 | orchestrator | Wednesday 14 May 2025 02:32:09 +0000 (0:00:05.372) 0:04:45.821 ********* 2025-05-14 02:34:51.690244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.690255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.690262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.690268 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.690279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.690289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.690296 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.690307 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.690314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.690324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.690331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.690337 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.690343 | orchestrator | 2025-05-14 02:34:51.690349 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-14 02:34:51.690356 | orchestrator | Wednesday 14 May 2025 02:32:10 +0000 (0:00:00.914) 
0:04:46.736 ********* 2025-05-14 02:34:51.690362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 02:34:51.690368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 02:34:51.690375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 02:34:51.690385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 02:34:51.690391 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.690398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 02:34:51.690408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 02:34:51.690415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 02:34:51.690421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 02:34:51.690427 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.690433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 02:34:51.690440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-14 02:34:51.690446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 02:34:51.690452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-14 02:34:51.690459 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.690465 | orchestrator | 2025-05-14 02:34:51.690471 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-14 02:34:51.690477 | orchestrator | Wednesday 14 May 2025 02:32:11 +0000 (0:00:01.301) 0:04:48.038 ********* 
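[Annotation, not part of the job output] The nova haproxy/firewall items logged above all carry the same per-listener map ('enabled', 'external', 'port', 'listen_port', 'tls_backend'). As a minimal illustrative sketch, assuming nothing beyond the dict literally printed in this log, the snippet below filters that map down to the listeners kolla-ansible would actually expose; the data is copied from the nova-api item above, and enabled_listeners() is a hypothetical helper, not a kolla-ansible function.

    # Illustrative only. Service definition copied verbatim from the nova-api
    # haproxy item logged above; the helper name is made up for this sketch.
    nova_api_haproxy = {
        "nova_api": {"enabled": True, "mode": "http", "external": False,
                     "port": "8774", "listen_port": "8774", "tls_backend": "no"},
        "nova_api_external": {"enabled": True, "mode": "http", "external": True,
                              "external_fqdn": "api.testbed.osism.xyz",
                              "port": "8774", "listen_port": "8774", "tls_backend": "no"},
        "nova_metadata": {"enabled": True, "mode": "http", "external": False,
                          "port": "8775", "listen_port": "8775", "tls_backend": "no"},
        "nova_metadata_external": {"enabled": "no", "mode": "http", "external": True,
                                   "external_fqdn": "api.testbed.osism.xyz",
                                   "port": "8775", "listen_port": "8775", "tls_backend": "no"},
    }

    def enabled_listeners(haproxy_map):
        # The log shows 'enabled' as both booleans and "yes"/"no" strings;
        # treat anything other than True / "yes" as disabled.
        for name, cfg in haproxy_map.items():
            if cfg.get("enabled") in (True, "yes"):
                scope = "external" if cfg.get("external") else "internal"
                yield name, cfg["listen_port"], scope

    for name, port, scope in enabled_listeners(nova_api_haproxy):
        print(f"{name}: {scope} listener on port {port}")
    # -> nova_api, nova_api_external and nova_metadata are kept;
    #    nova_metadata_external stays disabled ('no'), matching the skips above.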
2025-05-14 02:34:51.690484 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.690490 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.690496 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.690545 | orchestrator | 2025-05-14 02:34:51.690551 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-14 02:34:51.690563 | orchestrator | Wednesday 14 May 2025 02:32:13 +0000 (0:00:01.444) 0:04:49.482 ********* 2025-05-14 02:34:51.690569 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.690576 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.690582 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.690588 | orchestrator | 2025-05-14 02:34:51.690594 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-14 02:34:51.690600 | orchestrator | Wednesday 14 May 2025 02:32:15 +0000 (0:00:02.520) 0:04:52.003 ********* 2025-05-14 02:34:51.690606 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.690612 | orchestrator | 2025-05-14 02:34:51.690678 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-14 02:34:51.690686 | orchestrator | Wednesday 14 May 2025 02:32:17 +0000 (0:00:01.477) 0:04:53.480 ********* 2025-05-14 02:34:51.690692 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-14 02:34:51.690698 | orchestrator | 2025-05-14 02:34:51.690704 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-14 02:34:51.690710 | orchestrator | Wednesday 14 May 2025 02:32:18 +0000 (0:00:01.535) 0:04:55.016 ********* 2025-05-14 02:34:51.690716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-14 02:34:51.690733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-14 02:34:51.690740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-14 02:34:51.690747 | orchestrator | 2025-05-14 
02:34:51.690753 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-14 02:34:51.690759 | orchestrator | Wednesday 14 May 2025 02:32:23 +0000 (0:00:05.301) 0:05:00.318 ********* 2025-05-14 02:34:51.690766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:34:51.690772 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.690778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:34:51.690785 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.690796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:34:51.690803 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.690809 | orchestrator | 2025-05-14 02:34:51.690815 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-14 02:34:51.690821 | orchestrator | Wednesday 14 May 2025 02:32:25 +0000 (0:00:01.466) 0:05:01.784 ********* 2025-05-14 02:34:51.690827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 02:34:51.690834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 02:34:51.690846 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.690851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 02:34:51.690861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 
02:34:51.690867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 02:34:51.690872 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.690888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-14 02:34:51.690894 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.690899 | orchestrator | 2025-05-14 02:34:51.690905 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-14 02:34:51.690910 | orchestrator | Wednesday 14 May 2025 02:32:27 +0000 (0:00:02.144) 0:05:03.928 ********* 2025-05-14 02:34:51.690915 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.690921 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.690926 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.690931 | orchestrator | 2025-05-14 02:34:51.690937 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-14 02:34:51.690942 | orchestrator | Wednesday 14 May 2025 02:32:30 +0000 (0:00:02.999) 0:05:06.928 ********* 2025-05-14 02:34:51.690947 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.690952 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.690957 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.690963 | orchestrator | 2025-05-14 02:34:51.690968 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-14 02:34:51.690973 | orchestrator | Wednesday 14 May 2025 02:32:34 +0000 (0:00:03.756) 0:05:10.685 ********* 2025-05-14 02:34:51.690979 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-14 02:34:51.690984 | orchestrator | 2025-05-14 02:34:51.690989 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-14 02:34:51.690994 | orchestrator | Wednesday 14 May 2025 02:32:35 +0000 (0:00:01.378) 0:05:12.064 ********* 2025-05-14 02:34:51.691000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:34:51.691005 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.691011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:34:51.691021 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.691030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:34:51.691036 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.691041 | orchestrator | 2025-05-14 02:34:51.691046 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-14 02:34:51.691052 | orchestrator | Wednesday 14 May 2025 02:32:37 +0000 (0:00:01.690) 0:05:13.754 ********* 2025-05-14 02:34:51.691057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:34:51.691063 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.691115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:34:51.691122 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.691127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-14 02:34:51.691133 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.691138 | orchestrator | 2025-05-14 02:34:51.691143 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-14 02:34:51.691149 | orchestrator | Wednesday 14 May 2025 02:32:39 +0000 (0:00:01.926) 0:05:15.680 ********* 2025-05-14 02:34:51.691154 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.691160 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.691165 | 
orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.691170 | orchestrator | 2025-05-14 02:34:51.691176 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-14 02:34:51.691181 | orchestrator | Wednesday 14 May 2025 02:32:41 +0000 (0:00:02.142) 0:05:17.822 ********* 2025-05-14 02:34:51.691187 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.691192 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.691198 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.691203 | orchestrator | 2025-05-14 02:34:51.691209 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-14 02:34:51.691218 | orchestrator | Wednesday 14 May 2025 02:32:44 +0000 (0:00:03.044) 0:05:20.867 ********* 2025-05-14 02:34:51.691223 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.691228 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.691234 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.691239 | orchestrator | 2025-05-14 02:34:51.691244 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-14 02:34:51.691250 | orchestrator | Wednesday 14 May 2025 02:32:48 +0000 (0:00:03.534) 0:05:24.401 ********* 2025-05-14 02:34:51.691255 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-14 02:34:51.691260 | orchestrator | 2025-05-14 02:34:51.691266 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-14 02:34:51.691271 | orchestrator | Wednesday 14 May 2025 02:32:49 +0000 (0:00:01.273) 0:05:25.674 ********* 2025-05-14 02:34:51.691281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 02:34:51.691286 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.691292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 02:34:51.691297 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.691303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': 
'6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 02:34:51.691308 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.691314 | orchestrator | 2025-05-14 02:34:51.691319 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-14 02:34:51.691325 | orchestrator | Wednesday 14 May 2025 02:32:50 +0000 (0:00:01.602) 0:05:27.277 ********* 2025-05-14 02:34:51.691341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 02:34:51.691347 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.691352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 02:34:51.691362 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.691368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-14 02:34:51.691373 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.691379 | orchestrator | 2025-05-14 02:34:51.691384 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-14 02:34:51.691389 | orchestrator | Wednesday 14 May 2025 02:32:52 +0000 (0:00:01.687) 0:05:28.964 ********* 2025-05-14 02:34:51.691395 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.691400 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.691405 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.691411 | orchestrator | 2025-05-14 02:34:51.691416 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-14 02:34:51.691422 | orchestrator | Wednesday 14 May 2025 02:32:54 +0000 (0:00:01.916) 0:05:30.880 ********* 2025-05-14 02:34:51.691427 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.691432 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.691437 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.691443 | orchestrator | 2025-05-14 02:34:51.691448 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-14 02:34:51.691453 | orchestrator | 
Wednesday 14 May 2025 02:32:57 +0000 (0:00:02.597) 0:05:33.478 ********* 2025-05-14 02:34:51.691459 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.691464 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.691469 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.691475 | orchestrator | 2025-05-14 02:34:51.691484 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-14 02:34:51.691489 | orchestrator | Wednesday 14 May 2025 02:33:00 +0000 (0:00:03.488) 0:05:36.967 ********* 2025-05-14 02:34:51.691494 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.691499 | orchestrator | 2025-05-14 02:34:51.691505 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-14 02:34:51.691510 | orchestrator | Wednesday 14 May 2025 02:33:02 +0000 (0:00:01.739) 0:05:38.706 ********* 2025-05-14 02:34:51.691515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.691532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 02:34:51.691542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.691549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.691554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.691563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.691569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 02:34:51.691585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.691596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.691602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.691608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.691614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 02:34:51.691728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.691736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.691818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.691837 | orchestrator | 2025-05-14 02:34:51.691843 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-14 02:34:51.691848 | orchestrator | Wednesday 14 May 2025 02:33:07 +0000 (0:00:04.793) 0:05:43.500 ********* 2025-05-14 02:34:51.691854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.691860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 02:34:51.691869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.691875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.691885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.691891 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.691910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.691916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 02:34:51.691922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.691930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.691936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.691945 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.691961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.691968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-14 02:34:51.691973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.691979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-14 02:34:51.691997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:34:51.692003 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.692009 | orchestrator | 2025-05-14 02:34:51.692014 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-14 02:34:51.692020 | orchestrator | Wednesday 14 May 2025 02:33:08 +0000 (0:00:00.959) 0:05:44.459 ********* 2025-05-14 02:34:51.692025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 02:34:51.692035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 02:34:51.692040 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.692046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 02:34:51.692051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 02:34:51.692057 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.692062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 02:34:51.692068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-14 02:34:51.692084 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.692090 | orchestrator | 2025-05-14 02:34:51.692095 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-14 02:34:51.692101 | orchestrator | Wednesday 14 May 2025 02:33:09 +0000 (0:00:01.481) 0:05:45.941 ********* 2025-05-14 02:34:51.692106 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.692112 
| orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.692117 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.692122 | orchestrator | 2025-05-14 02:34:51.692128 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-14 02:34:51.692133 | orchestrator | Wednesday 14 May 2025 02:33:11 +0000 (0:00:01.459) 0:05:47.401 ********* 2025-05-14 02:34:51.692138 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.692143 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.692149 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.692154 | orchestrator | 2025-05-14 02:34:51.692159 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-14 02:34:51.692165 | orchestrator | Wednesday 14 May 2025 02:33:13 +0000 (0:00:02.495) 0:05:49.896 ********* 2025-05-14 02:34:51.692170 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.692175 | orchestrator | 2025-05-14 02:34:51.692181 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-14 02:34:51.692186 | orchestrator | Wednesday 14 May 2025 02:33:15 +0000 (0:00:01.574) 0:05:51.471 ********* 2025-05-14 02:34:51.692192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:34:51.692202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:34:51.692211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:34:51.692227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:34:51.692235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:34:51.692245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:34:51.692254 | orchestrator | 2025-05-14 02:34:51.692260 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-14 02:34:51.692265 | orchestrator | Wednesday 14 May 2025 02:33:21 +0000 (0:00:05.937) 0:05:57.408 ********* 2025-05-14 02:34:51.692271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:34:51.692288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:34:51.692294 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.692300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:34:51.692309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:34:51.692318 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.692324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:34:51.692341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:34:51.692347 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.692352 | orchestrator | 2025-05-14 02:34:51.692358 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-14 02:34:51.692363 | orchestrator | Wednesday 14 May 2025 02:33:21 +0000 (0:00:00.928) 0:05:58.336 ********* 2025-05-14 02:34:51.692369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-14 02:34:51.692374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 02:34:51.692380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 02:34:51.692390 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.692395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-14 02:34:51.692401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 02:34:51.692407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 02:34:51.692413 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.692418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-14 02:34:51.692428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 02:34:51.692434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-14 02:34:51.692440 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.692445 | orchestrator | 2025-05-14 02:34:51.692451 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-14 02:34:51.692456 | orchestrator | Wednesday 14 May 2025 02:33:23 +0000 (0:00:01.435) 0:05:59.772 ********* 2025-05-14 02:34:51.692462 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.692467 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.692472 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.692478 | orchestrator | 2025-05-14 02:34:51.692484 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-14 02:34:51.692489 | orchestrator | Wednesday 14 May 2025 02:33:24 +0000 (0:00:00.741) 0:06:00.513 ********* 2025-05-14 02:34:51.692495 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.692500 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.692505 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.692511 | orchestrator | 2025-05-14 02:34:51.692516 | orchestrator | TASK [include_role : 
prometheus] *********************************************** 2025-05-14 02:34:51.692521 | orchestrator | Wednesday 14 May 2025 02:33:25 +0000 (0:00:01.741) 0:06:02.255 ********* 2025-05-14 02:34:51.692527 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.692532 | orchestrator | 2025-05-14 02:34:51.692537 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-14 02:34:51.692543 | orchestrator | Wednesday 14 May 2025 02:33:27 +0000 (0:00:01.865) 0:06:04.120 ********* 2025-05-14 02:34:51.692559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:34:51.692571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:34:51.692576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:34:51.692582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:34:51.692597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:34:51.692652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:34:51.692663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:34:51.692669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:34:51.692713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:34:51.692724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:34:51.692730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:34:51.692739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:34:51.692745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 
'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:34:51.692771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:34:51.692797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:34:51.692803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': 
{'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:34:51.692828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:34:51.692837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-14 02:34:51.692848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:34:51.692858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692868 | orchestrator | 2025-05-14 02:34:51.692873 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-14 02:34:51.692879 | orchestrator | Wednesday 14 May 2025 02:33:33 +0000 (0:00:05.254) 0:06:09.375 ********* 2025-05-14 02:34:51.692885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:34:51.692890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:34:51.692896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:34:51.692920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:34:51.692929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:34:51.692935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692940 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:34:51.692952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692957 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.692962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:34:51.692978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:34:51.692983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.692993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:34:51.693001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:34:51.693006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:34:51.693019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.693024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.693029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:34:51.693034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.693039 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:34:51.693051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:34:51.693060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.693068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.693073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:34:51.693078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:34:51.693086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  
2025-05-14 02:34:51.693092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.693100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.693105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:34:51.693114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:34:51.693119 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693124 | orchestrator | 2025-05-14 02:34:51.693129 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-14 02:34:51.693134 | orchestrator | Wednesday 14 May 2025 02:33:34 +0000 (0:00:01.402) 0:06:10.778 ********* 2025-05-14 02:34:51.693139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-14 02:34:51.693144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-14 02:34:51.693149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 02:34:51.693156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 02:34:51.693160 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-14 02:34:51.693170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-14 02:34:51.693180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 02:34:51.693186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 02:34:51.693190 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-14 02:34:51.693200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-14 02:34:51.693205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 02:34:51.693210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-14 02:34:51.693215 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693219 | orchestrator | 2025-05-14 02:34:51.693227 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-14 02:34:51.693232 | orchestrator | Wednesday 14 May 2025 02:33:36 +0000 (0:00:01.672) 0:06:12.450 ********* 2025-05-14 02:34:51.693237 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693242 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693246 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693251 | orchestrator | 2025-05-14 02:34:51.693256 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-14 02:34:51.693261 | orchestrator | Wednesday 14 May 2025 02:33:36 +0000 
(0:00:00.744) 0:06:13.194 ********* 2025-05-14 02:34:51.693265 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693270 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693275 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693279 | orchestrator | 2025-05-14 02:34:51.693284 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-14 02:34:51.693289 | orchestrator | Wednesday 14 May 2025 02:33:38 +0000 (0:00:01.587) 0:06:14.781 ********* 2025-05-14 02:34:51.693293 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.693298 | orchestrator | 2025-05-14 02:34:51.693303 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-14 02:34:51.693307 | orchestrator | Wednesday 14 May 2025 02:33:40 +0000 (0:00:01.600) 0:06:16.381 ********* 2025-05-14 02:34:51.693312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:34:51.693325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:34:51.693331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-14 02:34:51.693336 | orchestrator | 2025-05-14 02:34:51.693341 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-14 02:34:51.693346 | orchestrator | Wednesday 14 May 2025 02:33:43 +0000 (0:00:03.131) 0:06:19.513 ********* 2025-05-14 02:34:51.693354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-14 02:34:51.693360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-14 02:34:51.693369 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693374 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-14 02:34:51.693388 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693393 | orchestrator | 2025-05-14 02:34:51.693397 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-14 02:34:51.693402 | orchestrator | Wednesday 14 May 2025 02:33:43 +0000 (0:00:00.734) 0:06:20.247 ********* 2025-05-14 02:34:51.693407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-14 02:34:51.693412 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-14 02:34:51.693421 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-14 02:34:51.693431 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693436 | orchestrator | 2025-05-14 02:34:51.693440 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-14 02:34:51.693445 | orchestrator | Wednesday 14 May 2025 02:33:44 +0000 (0:00:00.946) 0:06:21.193 ********* 2025-05-14 02:34:51.693450 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693455 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693459 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693464 | orchestrator | 2025-05-14 02:34:51.693472 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-14 02:34:51.693477 | orchestrator | Wednesday 14 May 2025 02:33:45 +0000 (0:00:00.769) 0:06:21.963 ********* 2025-05-14 02:34:51.693481 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693486 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693491 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693495 | orchestrator | 2025-05-14 02:34:51.693500 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-14 02:34:51.693505 | orchestrator | Wednesday 14 May 2025 02:33:47 +0000 (0:00:01.805) 0:06:23.768 ********* 2025-05-14 02:34:51.693514 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:34:51.693519 | orchestrator | 2025-05-14 02:34:51.693523 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-14 02:34:51.693528 | orchestrator | Wednesday 14 May 2025 02:33:49 +0000 (0:00:01.983) 0:06:25.752 ********* 2025-05-14 02:34:51.693533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.693541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.693547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.693555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.693566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.693571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-14 02:34:51.693576 | orchestrator | 2025-05-14 02:34:51.693581 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-14 02:34:51.693585 | orchestrator | Wednesday 14 May 2025 02:33:57 +0000 (0:00:08.178) 0:06:33.930 ********* 2025-05-14 02:34:51.693593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.693601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.693610 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.693631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.693637 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.693650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-14 02:34:51.693658 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693663 | orchestrator | 2025-05-14 02:34:51.693671 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-14 02:34:51.693676 | orchestrator | Wednesday 14 May 2025 02:33:58 +0000 (0:00:00.953) 0:06:34.883 ********* 2025-05-14 02:34:51.693681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 02:34:51.693686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 02:34:51.693691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 02:34:51.693696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 02:34:51.693700 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 02:34:51.693710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 02:34:51.693715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 02:34:51.693720 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 02:34:51.693725 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 02:34:51.693738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-14 02:34:51.693743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 02:34:51.693747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-14 02:34:51.693752 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693757 | orchestrator | 2025-05-14 02:34:51.693762 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-14 02:34:51.693767 | orchestrator | Wednesday 14 May 2025 02:34:00 +0000 (0:00:01.752) 0:06:36.636 ********* 2025-05-14 02:34:51.693771 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.693776 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.693787 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.693792 | orchestrator | 2025-05-14 02:34:51.693796 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-14 02:34:51.693801 | orchestrator | Wednesday 14 May 2025 02:34:01 +0000 (0:00:01.526) 0:06:38.163 ********* 2025-05-14 02:34:51.693806 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.693811 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.693815 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.693820 | orchestrator | 2025-05-14 02:34:51.693825 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-14 02:34:51.693830 | orchestrator | Wednesday 14 May 2025 02:34:04 +0000 (0:00:02.662) 0:06:40.826 ********* 2025-05-14 02:34:51.693834 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693839 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693843 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693848 | orchestrator | 2025-05-14 02:34:51.693853 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-14 02:34:51.693858 | orchestrator | Wednesday 14 May 2025 02:34:04 +0000 (0:00:00.328) 0:06:41.154 ********* 2025-05-14 02:34:51.693862 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693867 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693872 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693877 | orchestrator | 2025-05-14 02:34:51.693884 | orchestrator | TASK [include_role : trove] **************************************************** 
2025-05-14 02:34:51.693889 | orchestrator | Wednesday 14 May 2025 02:34:05 +0000 (0:00:00.581) 0:06:41.735 ********* 2025-05-14 02:34:51.693894 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693899 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693903 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693908 | orchestrator | 2025-05-14 02:34:51.693913 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-14 02:34:51.693917 | orchestrator | Wednesday 14 May 2025 02:34:05 +0000 (0:00:00.552) 0:06:42.288 ********* 2025-05-14 02:34:51.693922 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693927 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693931 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693936 | orchestrator | 2025-05-14 02:34:51.693941 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-14 02:34:51.693946 | orchestrator | Wednesday 14 May 2025 02:34:06 +0000 (0:00:00.307) 0:06:42.596 ********* 2025-05-14 02:34:51.693950 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693955 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693960 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693964 | orchestrator | 2025-05-14 02:34:51.693969 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-14 02:34:51.693974 | orchestrator | Wednesday 14 May 2025 02:34:06 +0000 (0:00:00.600) 0:06:43.196 ********* 2025-05-14 02:34:51.693978 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.693983 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.693988 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.693992 | orchestrator | 2025-05-14 02:34:51.693997 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-14 02:34:51.694002 | orchestrator | Wednesday 14 May 2025 02:34:07 +0000 (0:00:01.086) 0:06:44.283 ********* 2025-05-14 02:34:51.694006 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.694011 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.694037 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.694044 | orchestrator | 2025-05-14 02:34:51.694048 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-14 02:34:51.694053 | orchestrator | Wednesday 14 May 2025 02:34:08 +0000 (0:00:00.688) 0:06:44.971 ********* 2025-05-14 02:34:51.694058 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.694063 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.694067 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.694072 | orchestrator | 2025-05-14 02:34:51.694083 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-14 02:34:51.694088 | orchestrator | Wednesday 14 May 2025 02:34:09 +0000 (0:00:00.636) 0:06:45.608 ********* 2025-05-14 02:34:51.694093 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.694098 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.694102 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.694107 | orchestrator | 2025-05-14 02:34:51.694112 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-14 02:34:51.694116 | orchestrator | Wednesday 14 May 2025 02:34:10 +0000 (0:00:01.321) 0:06:46.929 
********* 2025-05-14 02:34:51.694121 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.694125 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.694130 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.694135 | orchestrator | 2025-05-14 02:34:51.694140 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-14 02:34:51.694145 | orchestrator | Wednesday 14 May 2025 02:34:11 +0000 (0:00:01.303) 0:06:48.232 ********* 2025-05-14 02:34:51.694149 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.694154 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.694159 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.694163 | orchestrator | 2025-05-14 02:34:51.694172 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-14 02:34:51.694177 | orchestrator | Wednesday 14 May 2025 02:34:12 +0000 (0:00:00.953) 0:06:49.186 ********* 2025-05-14 02:34:51.694181 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.694186 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.694190 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.694195 | orchestrator | 2025-05-14 02:34:51.694200 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-14 02:34:51.694205 | orchestrator | Wednesday 14 May 2025 02:34:23 +0000 (0:00:10.670) 0:06:59.857 ********* 2025-05-14 02:34:51.694209 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.694214 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.694219 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.694223 | orchestrator | 2025-05-14 02:34:51.694228 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-14 02:34:51.694233 | orchestrator | Wednesday 14 May 2025 02:34:24 +0000 (0:00:01.081) 0:07:00.939 ********* 2025-05-14 02:34:51.694237 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.694242 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.694247 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.694251 | orchestrator | 2025-05-14 02:34:51.694256 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-14 02:34:51.694261 | orchestrator | Wednesday 14 May 2025 02:34:30 +0000 (0:00:06.322) 0:07:07.262 ********* 2025-05-14 02:34:51.694265 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.694270 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.694274 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.694279 | orchestrator | 2025-05-14 02:34:51.694284 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-14 02:34:51.694288 | orchestrator | Wednesday 14 May 2025 02:34:35 +0000 (0:00:04.640) 0:07:11.902 ********* 2025-05-14 02:34:51.694293 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:34:51.694298 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:34:51.694303 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:34:51.694308 | orchestrator | 2025-05-14 02:34:51.694312 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-14 02:34:51.694317 | orchestrator | Wednesday 14 May 2025 02:34:43 +0000 (0:00:08.183) 0:07:20.086 ********* 2025-05-14 02:34:51.694322 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.694327 | orchestrator | skipping: 
[testbed-node-1] 2025-05-14 02:34:51.694331 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.694336 | orchestrator | 2025-05-14 02:34:51.694341 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-14 02:34:51.694353 | orchestrator | Wednesday 14 May 2025 02:34:44 +0000 (0:00:00.577) 0:07:20.663 ********* 2025-05-14 02:34:51.694358 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.694362 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.694367 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.694372 | orchestrator | 2025-05-14 02:34:51.694377 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-14 02:34:51.694381 | orchestrator | Wednesday 14 May 2025 02:34:44 +0000 (0:00:00.330) 0:07:20.993 ********* 2025-05-14 02:34:51.694386 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.694391 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.694395 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.694400 | orchestrator | 2025-05-14 02:34:51.694405 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-14 02:34:51.694409 | orchestrator | Wednesday 14 May 2025 02:34:45 +0000 (0:00:00.616) 0:07:21.610 ********* 2025-05-14 02:34:51.694414 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.694419 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.694423 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.694428 | orchestrator | 2025-05-14 02:34:51.694433 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-14 02:34:51.694437 | orchestrator | Wednesday 14 May 2025 02:34:45 +0000 (0:00:00.647) 0:07:22.257 ********* 2025-05-14 02:34:51.694442 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.694447 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.694451 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.694456 | orchestrator | 2025-05-14 02:34:51.694461 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-14 02:34:51.694465 | orchestrator | Wednesday 14 May 2025 02:34:46 +0000 (0:00:00.621) 0:07:22.878 ********* 2025-05-14 02:34:51.694470 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:34:51.694475 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:34:51.694479 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:34:51.694484 | orchestrator | 2025-05-14 02:34:51.694488 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-14 02:34:51.694493 | orchestrator | Wednesday 14 May 2025 02:34:46 +0000 (0:00:00.358) 0:07:23.237 ********* 2025-05-14 02:34:51.694498 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.694503 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.694507 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:34:51.694512 | orchestrator | 2025-05-14 02:34:51.694516 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-14 02:34:51.694521 | orchestrator | Wednesday 14 May 2025 02:34:48 +0000 (0:00:01.200) 0:07:24.437 ********* 2025-05-14 02:34:51.694526 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:34:51.694531 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:34:51.694535 | orchestrator | ok: [testbed-node-2] 
2025-05-14 02:34:51.694540 | orchestrator | 2025-05-14 02:34:51.694545 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:34:51.694549 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-14 02:34:51.694554 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-14 02:34:51.694559 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-14 02:34:51.694564 | orchestrator | 2025-05-14 02:34:51.694569 | orchestrator | 2025-05-14 02:34:51.694576 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:34:51.694581 | orchestrator | Wednesday 14 May 2025 02:34:49 +0000 (0:00:01.232) 0:07:25.670 ********* 2025-05-14 02:34:51.694585 | orchestrator | =============================================================================== 2025-05-14 02:34:51.694590 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.67s 2025-05-14 02:34:51.694598 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.18s 2025-05-14 02:34:51.694603 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.18s 2025-05-14 02:34:51.694608 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.07s 2025-05-14 02:34:51.694612 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 6.32s 2025-05-14 02:34:51.694617 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.13s 2025-05-14 02:34:51.694635 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.94s 2025-05-14 02:34:51.694640 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 5.92s 2025-05-14 02:34:51.694644 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 5.89s 2025-05-14 02:34:51.694649 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.85s 2025-05-14 02:34:51.694654 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.67s 2025-05-14 02:34:51.694658 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.58s 2025-05-14 02:34:51.694663 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.40s 2025-05-14 02:34:51.694668 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.37s 2025-05-14 02:34:51.694672 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.30s 2025-05-14 02:34:51.694677 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.25s 2025-05-14 02:34:51.694682 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.01s 2025-05-14 02:34:51.694686 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 4.99s 2025-05-14 02:34:51.694693 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.79s 2025-05-14 02:34:51.694698 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.64s 2025-05-14 02:34:54.724725 | 
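The PLAY RECAP above reports failed=0 and unreachable=0 for all three testbed nodes, which is the condition the rest of the job relies on before moving to the next tasks. A minimal Python sketch of checking such a recap line for failures (illustrative only, not part of the job output; the parsing logic is an assumption, not the osism tooling):

import re

# Recap line copied from the log above.
RECAP_LINE = ("testbed-node-0 : ok=127  changed=79  unreachable=0 "
              "failed=0 skipped=92  rescued=0 ignored=0")

def parse_recap(line):
    # Split "host : key=value key=value ..." into a dict of integer counters.
    host, _, counters = line.partition(":")
    stats = {key: int(value) for key, value in re.findall(r"(\w+)=(\d+)", counters)}
    stats["host"] = host.strip()
    return stats

stats = parse_recap(RECAP_LINE)
# The run is treated as healthy only when nothing failed and every host stayed reachable.
assert stats["failed"] == 0 and stats["unreachable"] == 0
print(stats)
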
orchestrator | 2025-05-14 02:34:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:54.725737 | orchestrator | 2025-05-14 02:34:54 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:34:54.727568 | orchestrator | 2025-05-14 02:34:54 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:54.729603 | orchestrator | 2025-05-14 02:34:54 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:34:54.729909 | orchestrator | 2025-05-14 02:34:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:34:57.767926 | orchestrator | 2025-05-14 02:34:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:34:57.768129 | orchestrator | 2025-05-14 02:34:57 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:34:57.773967 | orchestrator | 2025-05-14 02:34:57 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:34:57.774113 | orchestrator | 2025-05-14 02:34:57 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:34:57.774132 | orchestrator | 2025-05-14 02:34:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:00.802734 | orchestrator | 2025-05-14 02:35:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:00.804537 | orchestrator | 2025-05-14 02:35:00 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:00.807129 | orchestrator | 2025-05-14 02:35:00 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:00.810090 | orchestrator | 2025-05-14 02:35:00 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:00.810158 | orchestrator | 2025-05-14 02:35:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:03.847027 | orchestrator | 2025-05-14 02:35:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:03.847447 | orchestrator | 2025-05-14 02:35:03 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:03.848193 | orchestrator | 2025-05-14 02:35:03 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:03.848854 | orchestrator | 2025-05-14 02:35:03 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:03.848881 | orchestrator | 2025-05-14 02:35:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:06.886782 | orchestrator | 2025-05-14 02:35:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:06.888739 | orchestrator | 2025-05-14 02:35:06 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:06.888772 | orchestrator | 2025-05-14 02:35:06 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:06.888905 | orchestrator | 2025-05-14 02:35:06 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:06.888913 | orchestrator | 2025-05-14 02:35:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:09.930846 | orchestrator | 2025-05-14 02:35:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:09.931711 | orchestrator | 2025-05-14 02:35:09 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:09.932297 | 
orchestrator | 2025-05-14 02:35:09 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:09.934764 | orchestrator | 2025-05-14 02:35:09 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:09.935094 | orchestrator | 2025-05-14 02:35:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:12.994328 | orchestrator | 2025-05-14 02:35:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:12.994741 | orchestrator | 2025-05-14 02:35:12 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:12.995417 | orchestrator | 2025-05-14 02:35:12 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:12.996138 | orchestrator | 2025-05-14 02:35:12 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:12.996174 | orchestrator | 2025-05-14 02:35:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:16.030531 | orchestrator | 2025-05-14 02:35:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:16.030769 | orchestrator | 2025-05-14 02:35:16 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:16.033611 | orchestrator | 2025-05-14 02:35:16 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:16.034186 | orchestrator | 2025-05-14 02:35:16 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:16.034248 | orchestrator | 2025-05-14 02:35:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:19.067212 | orchestrator | 2025-05-14 02:35:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:19.067815 | orchestrator | 2025-05-14 02:35:19 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:19.069084 | orchestrator | 2025-05-14 02:35:19 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:19.069787 | orchestrator | 2025-05-14 02:35:19 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:19.069865 | orchestrator | 2025-05-14 02:35:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:22.109938 | orchestrator | 2025-05-14 02:35:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:22.110267 | orchestrator | 2025-05-14 02:35:22 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:22.111904 | orchestrator | 2025-05-14 02:35:22 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:22.112588 | orchestrator | 2025-05-14 02:35:22 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:22.112612 | orchestrator | 2025-05-14 02:35:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:25.163266 | orchestrator | 2025-05-14 02:35:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:25.166000 | orchestrator | 2025-05-14 02:35:25 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:25.167147 | orchestrator | 2025-05-14 02:35:25 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:25.167892 | orchestrator | 2025-05-14 02:35:25 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:25.167932 | 
orchestrator | 2025-05-14 02:35:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:28.227016 | orchestrator | 2025-05-14 02:35:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:28.227815 | orchestrator | 2025-05-14 02:35:28 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:28.230503 | orchestrator | 2025-05-14 02:35:28 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:28.232250 | orchestrator | 2025-05-14 02:35:28 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:28.232295 | orchestrator | 2025-05-14 02:35:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:31.295307 | orchestrator | 2025-05-14 02:35:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:31.297829 | orchestrator | 2025-05-14 02:35:31 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:31.299374 | orchestrator | 2025-05-14 02:35:31 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:31.301199 | orchestrator | 2025-05-14 02:35:31 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:31.302271 | orchestrator | 2025-05-14 02:35:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:34.358863 | orchestrator | 2025-05-14 02:35:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:34.359301 | orchestrator | 2025-05-14 02:35:34 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:34.360509 | orchestrator | 2025-05-14 02:35:34 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:34.361959 | orchestrator | 2025-05-14 02:35:34 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:34.362001 | orchestrator | 2025-05-14 02:35:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:37.402278 | orchestrator | 2025-05-14 02:35:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:37.404364 | orchestrator | 2025-05-14 02:35:37 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:37.405311 | orchestrator | 2025-05-14 02:35:37 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:37.406736 | orchestrator | 2025-05-14 02:35:37 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:37.406765 | orchestrator | 2025-05-14 02:35:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:40.452231 | orchestrator | 2025-05-14 02:35:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:40.453484 | orchestrator | 2025-05-14 02:35:40 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:40.453527 | orchestrator | 2025-05-14 02:35:40 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:40.457076 | orchestrator | 2025-05-14 02:35:40 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:40.457995 | orchestrator | 2025-05-14 02:35:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:43.511862 | orchestrator | 2025-05-14 02:35:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:43.513258 | orchestrator | 2025-05-14 
02:35:43 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:43.515962 | orchestrator | 2025-05-14 02:35:43 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:43.516941 | orchestrator | 2025-05-14 02:35:43 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:43.516966 | orchestrator | 2025-05-14 02:35:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:46.577410 | orchestrator | 2025-05-14 02:35:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:46.578187 | orchestrator | 2025-05-14 02:35:46 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:46.579678 | orchestrator | 2025-05-14 02:35:46 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:46.581492 | orchestrator | 2025-05-14 02:35:46 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:46.581526 | orchestrator | 2025-05-14 02:35:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:49.624496 | orchestrator | 2025-05-14 02:35:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:49.629072 | orchestrator | 2025-05-14 02:35:49 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:49.631578 | orchestrator | 2025-05-14 02:35:49 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:49.633405 | orchestrator | 2025-05-14 02:35:49 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:49.633444 | orchestrator | 2025-05-14 02:35:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:52.680250 | orchestrator | 2025-05-14 02:35:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:52.680385 | orchestrator | 2025-05-14 02:35:52 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:52.680449 | orchestrator | 2025-05-14 02:35:52 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:52.681451 | orchestrator | 2025-05-14 02:35:52 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:52.681559 | orchestrator | 2025-05-14 02:35:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:55.728216 | orchestrator | 2025-05-14 02:35:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:55.729696 | orchestrator | 2025-05-14 02:35:55 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:55.730350 | orchestrator | 2025-05-14 02:35:55 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:55.732366 | orchestrator | 2025-05-14 02:35:55 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:55.732409 | orchestrator | 2025-05-14 02:35:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:35:58.785361 | orchestrator | 2025-05-14 02:35:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:35:58.787258 | orchestrator | 2025-05-14 02:35:58 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:35:58.789304 | orchestrator | 2025-05-14 02:35:58 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:35:58.791090 | orchestrator | 2025-05-14 
02:35:58 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:35:58.791130 | orchestrator | 2025-05-14 02:35:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:01.857516 | orchestrator | 2025-05-14 02:36:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:01.860211 | orchestrator | 2025-05-14 02:36:01 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:01.862184 | orchestrator | 2025-05-14 02:36:01 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:01.864148 | orchestrator | 2025-05-14 02:36:01 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:01.864205 | orchestrator | 2025-05-14 02:36:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:04.937934 | orchestrator | 2025-05-14 02:36:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:04.940083 | orchestrator | 2025-05-14 02:36:04 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:04.941292 | orchestrator | 2025-05-14 02:36:04 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:04.943803 | orchestrator | 2025-05-14 02:36:04 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:04.943855 | orchestrator | 2025-05-14 02:36:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:07.992419 | orchestrator | 2025-05-14 02:36:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:07.994884 | orchestrator | 2025-05-14 02:36:07 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:07.996325 | orchestrator | 2025-05-14 02:36:07 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:08.001435 | orchestrator | 2025-05-14 02:36:07 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:08.001476 | orchestrator | 2025-05-14 02:36:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:11.046657 | orchestrator | 2025-05-14 02:36:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:11.048420 | orchestrator | 2025-05-14 02:36:11 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:11.050448 | orchestrator | 2025-05-14 02:36:11 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:11.050795 | orchestrator | 2025-05-14 02:36:11 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:11.050826 | orchestrator | 2025-05-14 02:36:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:14.104453 | orchestrator | 2025-05-14 02:36:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:14.109036 | orchestrator | 2025-05-14 02:36:14 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:14.109085 | orchestrator | 2025-05-14 02:36:14 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:14.110575 | orchestrator | 2025-05-14 02:36:14 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:14.112597 | orchestrator | 2025-05-14 02:36:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:17.162931 | orchestrator | 2025-05-14 02:36:17 | INFO  | Task 
d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:17.166930 | orchestrator | 2025-05-14 02:36:17 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:17.168867 | orchestrator | 2025-05-14 02:36:17 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:17.170837 | orchestrator | 2025-05-14 02:36:17 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:17.170915 | orchestrator | 2025-05-14 02:36:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:20.215100 | orchestrator | 2025-05-14 02:36:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:20.215707 | orchestrator | 2025-05-14 02:36:20 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:20.217209 | orchestrator | 2025-05-14 02:36:20 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:20.217760 | orchestrator | 2025-05-14 02:36:20 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:20.217776 | orchestrator | 2025-05-14 02:36:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:23.248775 | orchestrator | 2025-05-14 02:36:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:23.248884 | orchestrator | 2025-05-14 02:36:23 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:23.249617 | orchestrator | 2025-05-14 02:36:23 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:23.250342 | orchestrator | 2025-05-14 02:36:23 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:23.250356 | orchestrator | 2025-05-14 02:36:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:26.287266 | orchestrator | 2025-05-14 02:36:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:26.289008 | orchestrator | 2025-05-14 02:36:26 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:26.290386 | orchestrator | 2025-05-14 02:36:26 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:26.291956 | orchestrator | 2025-05-14 02:36:26 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:26.292070 | orchestrator | 2025-05-14 02:36:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:29.334349 | orchestrator | 2025-05-14 02:36:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:29.336312 | orchestrator | 2025-05-14 02:36:29 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:29.337808 | orchestrator | 2025-05-14 02:36:29 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:29.339790 | orchestrator | 2025-05-14 02:36:29 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:29.339847 | orchestrator | 2025-05-14 02:36:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:32.397567 | orchestrator | 2025-05-14 02:36:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:32.400106 | orchestrator | 2025-05-14 02:36:32 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:32.401923 | orchestrator | 2025-05-14 02:36:32 | INFO  | Task 
4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:32.403965 | orchestrator | 2025-05-14 02:36:32 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:32.404008 | orchestrator | 2025-05-14 02:36:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:35.469361 | orchestrator | 2025-05-14 02:36:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:35.472190 | orchestrator | 2025-05-14 02:36:35 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:35.473564 | orchestrator | 2025-05-14 02:36:35 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:35.476059 | orchestrator | 2025-05-14 02:36:35 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:35.476108 | orchestrator | 2025-05-14 02:36:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:38.534568 | orchestrator | 2025-05-14 02:36:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:38.534853 | orchestrator | 2025-05-14 02:36:38 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:38.536801 | orchestrator | 2025-05-14 02:36:38 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:38.538441 | orchestrator | 2025-05-14 02:36:38 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:38.538552 | orchestrator | 2025-05-14 02:36:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:41.584725 | orchestrator | 2025-05-14 02:36:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:41.586398 | orchestrator | 2025-05-14 02:36:41 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:41.588011 | orchestrator | 2025-05-14 02:36:41 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:41.589300 | orchestrator | 2025-05-14 02:36:41 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:41.589570 | orchestrator | 2025-05-14 02:36:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:44.642676 | orchestrator | 2025-05-14 02:36:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:44.644270 | orchestrator | 2025-05-14 02:36:44 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:44.647228 | orchestrator | 2025-05-14 02:36:44 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:44.649920 | orchestrator | 2025-05-14 02:36:44 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:44.649983 | orchestrator | 2025-05-14 02:36:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:47.702753 | orchestrator | 2025-05-14 02:36:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:47.703537 | orchestrator | 2025-05-14 02:36:47 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:47.705105 | orchestrator | 2025-05-14 02:36:47 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:47.710431 | orchestrator | 2025-05-14 02:36:47 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:47.710726 | orchestrator | 2025-05-14 02:36:47 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 02:36:50.753063 | orchestrator | 2025-05-14 02:36:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:50.753386 | orchestrator | 2025-05-14 02:36:50 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:50.754257 | orchestrator | 2025-05-14 02:36:50 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:50.755560 | orchestrator | 2025-05-14 02:36:50 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:50.757097 | orchestrator | 2025-05-14 02:36:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:53.811262 | orchestrator | 2025-05-14 02:36:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:53.812517 | orchestrator | 2025-05-14 02:36:53 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:53.814426 | orchestrator | 2025-05-14 02:36:53 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:53.817185 | orchestrator | 2025-05-14 02:36:53 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:53.817260 | orchestrator | 2025-05-14 02:36:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:56.868576 | orchestrator | 2025-05-14 02:36:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:56.869107 | orchestrator | 2025-05-14 02:36:56 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:56.872095 | orchestrator | 2025-05-14 02:36:56 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:56.874529 | orchestrator | 2025-05-14 02:36:56 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:56.874581 | orchestrator | 2025-05-14 02:36:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:36:59.922710 | orchestrator | 2025-05-14 02:36:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:36:59.928493 | orchestrator | 2025-05-14 02:36:59 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:36:59.930116 | orchestrator | 2025-05-14 02:36:59 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:36:59.932549 | orchestrator | 2025-05-14 02:36:59 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:36:59.932580 | orchestrator | 2025-05-14 02:36:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:02.978164 | orchestrator | 2025-05-14 02:37:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:02.980280 | orchestrator | 2025-05-14 02:37:02 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:02.981522 | orchestrator | 2025-05-14 02:37:02 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:02.983004 | orchestrator | 2025-05-14 02:37:02 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:37:02.983036 | orchestrator | 2025-05-14 02:37:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:06.029808 | orchestrator | 2025-05-14 02:37:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:06.030714 | orchestrator | 2025-05-14 02:37:06 | INFO  | Task 
a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:06.032471 | orchestrator | 2025-05-14 02:37:06 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:06.034137 | orchestrator | 2025-05-14 02:37:06 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:37:06.034176 | orchestrator | 2025-05-14 02:37:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:09.072756 | orchestrator | 2025-05-14 02:37:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:09.076935 | orchestrator | 2025-05-14 02:37:09 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:09.076994 | orchestrator | 2025-05-14 02:37:09 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:09.077678 | orchestrator | 2025-05-14 02:37:09 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state STARTED 2025-05-14 02:37:09.078124 | orchestrator | 2025-05-14 02:37:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:12.115965 | orchestrator | 2025-05-14 02:37:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:12.119777 | orchestrator | 2025-05-14 02:37:12 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:12.120650 | orchestrator | 2025-05-14 02:37:12 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:12.121892 | orchestrator | 2025-05-14 02:37:12 | INFO  | Task 3ef4b4ad-9df3-41d1-bd69-da9ee69ac5d7 is in state SUCCESS 2025-05-14 02:37:12.121918 | orchestrator | 2025-05-14 02:37:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:12.123323 | orchestrator | 2025-05-14 02:37:12.123395 | orchestrator | 2025-05-14 02:37:12.123409 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:37:12.123421 | orchestrator | 2025-05-14 02:37:12.123432 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:37:12.123443 | orchestrator | Wednesday 14 May 2025 02:34:53 +0000 (0:00:00.330) 0:00:00.330 ********* 2025-05-14 02:37:12.123454 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:12.123466 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:37:12.123477 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:37:12.123487 | orchestrator | 2025-05-14 02:37:12.123515 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:37:12.123526 | orchestrator | Wednesday 14 May 2025 02:34:53 +0000 (0:00:00.457) 0:00:00.788 ********* 2025-05-14 02:37:12.123538 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-14 02:37:12.123549 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-14 02:37:12.123559 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-14 02:37:12.123570 | orchestrator | 2025-05-14 02:37:12.123580 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-14 02:37:12.123632 | orchestrator | 2025-05-14 02:37:12.123645 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-14 02:37:12.123656 | orchestrator | Wednesday 14 May 2025 02:34:54 +0000 (0:00:00.322) 0:00:01.110 ********* 2025-05-14 02:37:12.123690 | orchestrator | included: 
/ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:12.123702 | orchestrator | 2025-05-14 02:37:12.123713 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-14 02:37:12.123723 | orchestrator | Wednesday 14 May 2025 02:34:54 +0000 (0:00:00.775) 0:00:01.886 ********* 2025-05-14 02:37:12.123734 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 02:37:12.123745 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 02:37:12.123756 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-14 02:37:12.123767 | orchestrator | 2025-05-14 02:37:12.123777 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-14 02:37:12.123788 | orchestrator | Wednesday 14 May 2025 02:34:55 +0000 (0:00:00.858) 0:00:02.744 ********* 2025-05-14 02:37:12.123803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:37:12.123819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:37:12.123845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:37:12.123877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:37:12.123916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:37:12.123935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:37:12.123949 | orchestrator | 2025-05-14 02:37:12.123961 | orchestrator | TASK 
[opensearch : include_tasks] ********************************************** 2025-05-14 02:37:12.123974 | orchestrator | Wednesday 14 May 2025 02:34:57 +0000 (0:00:01.377) 0:00:04.122 ********* 2025-05-14 02:37:12.123986 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:12.123999 | orchestrator | 2025-05-14 02:37:12.124011 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-14 02:37:12.124023 | orchestrator | Wednesday 14 May 2025 02:34:57 +0000 (0:00:00.548) 0:00:04.671 ********* 2025-05-14 02:37:12.124049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:37:12.124069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:37:12.124081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:37:12.124093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:37:12.124113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:37:12.124138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:37:12.124150 | orchestrator | 2025-05-14 02:37:12.124161 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-14 02:37:12.124172 | orchestrator | Wednesday 14 May 2025 02:35:00 +0000 (0:00:03.114) 0:00:07.785 ********* 2025-05-14 02:37:12.124183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:37:12.124196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:37:12.124208 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:12.124226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:37:12.124250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:37:12.124263 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:12.124275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:37:12.124287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:37:12.124298 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:12.124309 | orchestrator | 2025-05-14 02:37:12.124320 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-14 02:37:12.124331 | orchestrator | Wednesday 14 May 2025 02:35:01 +0000 (0:00:00.892) 0:00:08.678 ********* 2025-05-14 02:37:12.124349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:37:12.124372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:37:12.124384 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:12.124395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:37:12.124407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:37:12.124419 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:12.124435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-14 02:37:12.124460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-14 02:37:12.124472 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:12.124483 | orchestrator | 2025-05-14 02:37:12.124494 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-14 02:37:12.124504 | orchestrator | Wednesday 14 May 2025 02:35:02 +0000 (0:00:01.096) 0:00:09.775 ********* 2025-05-14 02:37:12.124515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:37:12.124527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:37:12.124538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:37:12.124589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:37:12.124632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:37:12.124644 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:37:12.124656 | orchestrator | 2025-05-14 02:37:12.124668 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-14 02:37:12.124679 | orchestrator | Wednesday 14 May 2025 02:35:05 +0000 (0:00:02.874) 0:00:12.649 ********* 2025-05-14 02:37:12.124697 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:12.124708 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:12.124719 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:12.124729 | orchestrator | 2025-05-14 02:37:12.124740 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-14 02:37:12.124751 | orchestrator | Wednesday 14 May 2025 02:35:10 +0000 (0:00:04.487) 0:00:17.137 ********* 2025-05-14 02:37:12.124762 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:12.124772 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:12.124783 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:12.124794 | orchestrator | 2025-05-14 02:37:12.124805 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-14 02:37:12.124816 | orchestrator | Wednesday 14 May 2025 02:35:12 +0000 (0:00:02.353) 0:00:19.491 ********* 2025-05-14 02:37:12.125074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:37:12.125097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:37:12.125109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-14 02:37:12.125121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:37:12.125151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:37:12.125169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-14 02:37:12.125181 | orchestrator | 2025-05-14 02:37:12.125192 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-14 02:37:12.125203 | orchestrator | Wednesday 14 May 2025 02:35:15 +0000 (0:00:03.176) 0:00:22.667 ********* 2025-05-14 02:37:12.125214 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:12.125225 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:37:12.125235 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:37:12.125246 | orchestrator | 2025-05-14 02:37:12.125257 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-14 02:37:12.125268 | orchestrator | Wednesday 14 May 2025 02:35:15 +0000 (0:00:00.244) 0:00:22.911 ********* 2025-05-14 02:37:12.125278 | orchestrator | 2025-05-14 02:37:12.125289 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-14 02:37:12.125300 | orchestrator | Wednesday 14 May 2025 02:35:16 +0000 (0:00:00.145) 0:00:23.057 ********* 2025-05-14 02:37:12.125310 | orchestrator | 2025-05-14 02:37:12.125321 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-14 02:37:12.125331 | orchestrator | Wednesday 14 May 2025 02:35:16 +0000 (0:00:00.049) 0:00:23.107 ********* 2025-05-14 02:37:12.125369 | orchestrator | 2025-05-14 02:37:12.125380 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-14 02:37:12.125391 | orchestrator | Wednesday 14 May 2025 02:35:16 +0000 (0:00:00.053) 0:00:23.160 ********* 2025-05-14 02:37:12.125409 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:12.125419 | orchestrator | 2025-05-14 02:37:12.125430 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-14 02:37:12.125440 | orchestrator | Wednesday 14 May 2025 02:35:16 +0000 (0:00:00.155) 0:00:23.316 ********* 2025-05-14 02:37:12.125451 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:37:12.125461 | orchestrator | 2025-05-14 02:37:12.125472 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-14 02:37:12.125483 | orchestrator | Wednesday 14 May 2025 02:35:16 +0000 
(0:00:00.366) 0:00:23.682 ********* 2025-05-14 02:37:12.125493 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:12.125504 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:12.125515 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:12.125525 | orchestrator | 2025-05-14 02:37:12.125536 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-14 02:37:12.125547 | orchestrator | Wednesday 14 May 2025 02:35:55 +0000 (0:00:38.784) 0:01:02.467 ********* 2025-05-14 02:37:12.125557 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:12.125568 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:37:12.125578 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:37:12.125589 | orchestrator | 2025-05-14 02:37:12.125629 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-14 02:37:12.125641 | orchestrator | Wednesday 14 May 2025 02:36:57 +0000 (0:01:02.070) 0:02:04.537 ********* 2025-05-14 02:37:12.125655 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:37:12.125668 | orchestrator | 2025-05-14 02:37:12.125680 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-14 02:37:12.125692 | orchestrator | Wednesday 14 May 2025 02:36:58 +0000 (0:00:00.729) 0:02:05.267 ********* 2025-05-14 02:37:12.125704 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:12.125717 | orchestrator | 2025-05-14 02:37:12.125729 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-14 02:37:12.125741 | orchestrator | Wednesday 14 May 2025 02:37:00 +0000 (0:00:02.617) 0:02:07.884 ********* 2025-05-14 02:37:12.125754 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:37:12.125766 | orchestrator | 2025-05-14 02:37:12.125778 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-14 02:37:12.125791 | orchestrator | Wednesday 14 May 2025 02:37:03 +0000 (0:00:02.447) 0:02:10.332 ********* 2025-05-14 02:37:12.125804 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:12.125816 | orchestrator | 2025-05-14 02:37:12.125828 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-14 02:37:12.125840 | orchestrator | Wednesday 14 May 2025 02:37:06 +0000 (0:00:03.052) 0:02:13.384 ********* 2025-05-14 02:37:12.125852 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:37:12.125864 | orchestrator | 2025-05-14 02:37:12.125883 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:37:12.125897 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-14 02:37:12.125911 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:37:12.125929 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-14 02:37:12.125941 | orchestrator | 2025-05-14 02:37:12.125953 | orchestrator | 2025-05-14 02:37:12.125965 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:37:12.125978 | orchestrator | Wednesday 14 May 2025 02:37:09 +0000 (0:00:02.945) 0:02:16.330 ********* 2025-05-14 02:37:12.125990 | orchestrator | 
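The three retention-policy tasks above ("Check if a log retention policy exists", "Create new log retention policy", "Apply retention policy to existing indices") talk to the OpenSearch ISM plugin API. The following is only an illustrative Python sketch of that interaction; the policy name, index pattern and retention period are assumptions made for the example, not values taken from the playbook.

import requests

OPENSEARCH = "http://192.168.16.10:9200"   # internal endpoint, as used by the healthchecks above
POLICY_URL = f"{OPENSEARCH}/_plugins/_ism/policies/retention"   # hypothetical policy name

# "Check if a log retention policy exists"
exists = requests.get(POLICY_URL).status_code == 200

# "Create new log retention policy": delete indices older than 14 days (assumed value)
policy = {
    "policy": {
        "description": "delete old log indices",
        "default_state": "hot",
        "states": [
            {"name": "hot", "actions": [],
             "transitions": [{"state_name": "delete",
                              "conditions": {"min_index_age": "14d"}}]},
            {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
        ],
        "ism_template": [{"index_patterns": ["flog-*"], "priority": 100}],
    }
}
if not exists:
    requests.put(POLICY_URL, json=policy).raise_for_status()

# "Apply retention policy to existing indices"
requests.post(f"{OPENSEARCH}/_plugins/_ism/add/flog-*",
              json={"policy_id": "retention"}).raise_for_status()
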
=============================================================================== 2025-05-14 02:37:12.126011 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 62.07s 2025-05-14 02:37:12.126078 | orchestrator | opensearch : Restart opensearch container ------------------------------ 38.78s 2025-05-14 02:37:12.126090 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.49s 2025-05-14 02:37:12.126106 | orchestrator | opensearch : Check opensearch containers -------------------------------- 3.18s 2025-05-14 02:37:12.126117 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.11s 2025-05-14 02:37:12.126127 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.05s 2025-05-14 02:37:12.126138 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.95s 2025-05-14 02:37:12.126148 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.87s 2025-05-14 02:37:12.126159 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.62s 2025-05-14 02:37:12.126170 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.45s 2025-05-14 02:37:12.126180 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.35s 2025-05-14 02:37:12.126191 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.38s 2025-05-14 02:37:12.126201 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.10s 2025-05-14 02:37:12.126212 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.89s 2025-05-14 02:37:12.126222 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.86s 2025-05-14 02:37:12.126233 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.78s 2025-05-14 02:37:12.126243 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.73s 2025-05-14 02:37:12.126254 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2025-05-14 02:37:12.126264 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s 2025-05-14 02:37:12.126275 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.37s 2025-05-14 02:37:15.179571 | orchestrator | 2025-05-14 02:37:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:15.180401 | orchestrator | 2025-05-14 02:37:15 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:15.181534 | orchestrator | 2025-05-14 02:37:15 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:15.181576 | orchestrator | 2025-05-14 02:37:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:18.223536 | orchestrator | 2025-05-14 02:37:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:18.223958 | orchestrator | 2025-05-14 02:37:18 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:18.225345 | orchestrator | 2025-05-14 02:37:18 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:18.225388 | orchestrator | 2025-05-14 
02:37:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:21.273050 | orchestrator | 2025-05-14 02:37:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:21.274860 | orchestrator | 2025-05-14 02:37:21 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:21.276336 | orchestrator | 2025-05-14 02:37:21 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:21.276518 | orchestrator | 2025-05-14 02:37:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:24.323447 | orchestrator | 2025-05-14 02:37:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:24.325785 | orchestrator | 2025-05-14 02:37:24 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:24.327359 | orchestrator | 2025-05-14 02:37:24 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:24.327391 | orchestrator | 2025-05-14 02:37:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:27.379943 | orchestrator | 2025-05-14 02:37:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:27.381251 | orchestrator | 2025-05-14 02:37:27 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:27.383635 | orchestrator | 2025-05-14 02:37:27 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:27.384247 | orchestrator | 2025-05-14 02:37:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:30.437772 | orchestrator | 2025-05-14 02:37:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:30.440096 | orchestrator | 2025-05-14 02:37:30 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:30.441390 | orchestrator | 2025-05-14 02:37:30 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:30.441431 | orchestrator | 2025-05-14 02:37:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:33.493873 | orchestrator | 2025-05-14 02:37:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:33.495312 | orchestrator | 2025-05-14 02:37:33 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:33.497970 | orchestrator | 2025-05-14 02:37:33 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:33.498012 | orchestrator | 2025-05-14 02:37:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:36.550566 | orchestrator | 2025-05-14 02:37:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:36.555067 | orchestrator | 2025-05-14 02:37:36 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:36.557705 | orchestrator | 2025-05-14 02:37:36 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:36.557770 | orchestrator | 2025-05-14 02:37:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:39.595682 | orchestrator | 2025-05-14 02:37:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:39.596880 | orchestrator | 2025-05-14 02:37:39 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:39.598735 | orchestrator | 2025-05-14 02:37:39 | INFO  | Task 
4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:39.598766 | orchestrator | 2025-05-14 02:37:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:42.652369 | orchestrator | 2025-05-14 02:37:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:42.654776 | orchestrator | 2025-05-14 02:37:42 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:42.657801 | orchestrator | 2025-05-14 02:37:42 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:42.657844 | orchestrator | 2025-05-14 02:37:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:45.706809 | orchestrator | 2025-05-14 02:37:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:45.708550 | orchestrator | 2025-05-14 02:37:45 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:45.709796 | orchestrator | 2025-05-14 02:37:45 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:45.709827 | orchestrator | 2025-05-14 02:37:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:48.761965 | orchestrator | 2025-05-14 02:37:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:48.764294 | orchestrator | 2025-05-14 02:37:48 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:48.766281 | orchestrator | 2025-05-14 02:37:48 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:48.766330 | orchestrator | 2025-05-14 02:37:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:51.827349 | orchestrator | 2025-05-14 02:37:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:51.829375 | orchestrator | 2025-05-14 02:37:51 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:51.832007 | orchestrator | 2025-05-14 02:37:51 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:51.832409 | orchestrator | 2025-05-14 02:37:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:54.890394 | orchestrator | 2025-05-14 02:37:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:54.891946 | orchestrator | 2025-05-14 02:37:54 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:54.893435 | orchestrator | 2025-05-14 02:37:54 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:54.893481 | orchestrator | 2025-05-14 02:37:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:37:57.933969 | orchestrator | 2025-05-14 02:37:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:37:57.935299 | orchestrator | 2025-05-14 02:37:57 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:37:57.938359 | orchestrator | 2025-05-14 02:37:57 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:37:57.938875 | orchestrator | 2025-05-14 02:37:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:00.985710 | orchestrator | 2025-05-14 02:38:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:38:00.985981 | orchestrator | 2025-05-14 02:38:00 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state 
STARTED 2025-05-14 02:38:00.987431 | orchestrator | 2025-05-14 02:38:00 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:38:00.987469 | orchestrator | 2025-05-14 02:38:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:04.037113 | orchestrator | 2025-05-14 02:38:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:38:04.038358 | orchestrator | 2025-05-14 02:38:04 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:38:04.039866 | orchestrator | 2025-05-14 02:38:04 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:38:04.039939 | orchestrator | 2025-05-14 02:38:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:07.092933 | orchestrator | 2025-05-14 02:38:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:38:07.094175 | orchestrator | 2025-05-14 02:38:07 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:38:07.094244 | orchestrator | 2025-05-14 02:38:07 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:38:07.094286 | orchestrator | 2025-05-14 02:38:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:10.146138 | orchestrator | 2025-05-14 02:38:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:38:10.146745 | orchestrator | 2025-05-14 02:38:10 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:38:10.150572 | orchestrator | 2025-05-14 02:38:10 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state STARTED 2025-05-14 02:38:10.150712 | orchestrator | 2025-05-14 02:38:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:13.209467 | orchestrator | 2025-05-14 02:38:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:38:13.210317 | orchestrator | 2025-05-14 02:38:13 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:38:13.213028 | orchestrator | 2025-05-14 02:38:13 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in state STARTED 2025-05-14 02:38:13.219524 | orchestrator | 2025-05-14 02:38:13.219579 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 02:38:13.219621 | orchestrator | 2025-05-14 02:38:13.219628 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-14 02:38:13.219633 | orchestrator | 2025-05-14 02:38:13.219638 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-14 02:38:13.219644 | orchestrator | Wednesday 14 May 2025 02:24:54 +0000 (0:00:02.121) 0:00:02.121 ********* 2025-05-14 02:38:13.219650 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.219656 | orchestrator | 2025-05-14 02:38:13.219661 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-14 02:38:13.219666 | orchestrator | Wednesday 14 May 2025 02:24:56 +0000 (0:00:01.499) 0:00:03.621 ********* 2025-05-14 02:38:13.219671 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:38:13.219676 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 
02:38:13.219681 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 02:38:13.219685 | orchestrator | 2025-05-14 02:38:13.219690 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-14 02:38:13.219694 | orchestrator | Wednesday 14 May 2025 02:24:57 +0000 (0:00:00.793) 0:00:04.415 ********* 2025-05-14 02:38:13.219699 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.219704 | orchestrator | 2025-05-14 02:38:13.219721 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-14 02:38:13.219726 | orchestrator | Wednesday 14 May 2025 02:24:58 +0000 (0:00:01.380) 0:00:05.796 ********* 2025-05-14 02:38:13.219730 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.219735 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.219739 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.219744 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.219748 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.219753 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.219757 | orchestrator | 2025-05-14 02:38:13.219762 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-14 02:38:13.219766 | orchestrator | Wednesday 14 May 2025 02:24:59 +0000 (0:00:01.259) 0:00:07.055 ********* 2025-05-14 02:38:13.219771 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.219775 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.219779 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.219784 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.219788 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.219807 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.219812 | orchestrator | 2025-05-14 02:38:13.219817 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-14 02:38:13.219822 | orchestrator | Wednesday 14 May 2025 02:25:00 +0000 (0:00:01.021) 0:00:08.076 ********* 2025-05-14 02:38:13.219829 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.219836 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.219843 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.219852 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.219863 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.219874 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.219881 | orchestrator | 2025-05-14 02:38:13.219888 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-14 02:38:13.219895 | orchestrator | Wednesday 14 May 2025 02:25:01 +0000 (0:00:01.177) 0:00:09.254 ********* 2025-05-14 02:38:13.219902 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.219909 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.219917 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.219924 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.219931 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.219938 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.219944 | orchestrator | 2025-05-14 02:38:13.219951 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-14 02:38:13.219958 | orchestrator | Wednesday 14 May 2025 02:25:03 +0000 (0:00:01.156) 0:00:10.411 ********* 
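The repeated "Task <id> is in state STARTED" and "Wait 1 second(s) until the next check" lines interleaved above come from a simple wait loop that polls the state of the queued tasks until they finish. A minimal sketch of that pattern, assuming a hypothetical get_task_state() callback in place of the real osism task-queue client:

import logging
import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll each task until it leaves the STARTED state, logging like the output above."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):   # sorted() copies the set, so discarding below is safe
            state = get_task_state(task_id)
            logging.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            logging.info("Wait %d second(s) until the next check", interval)
            time.sleep(interval)
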
2025-05-14 02:38:13.219964 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.219971 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.219978 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.219985 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.219992 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.219999 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.220005 | orchestrator | 2025-05-14 02:38:13.220012 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-14 02:38:13.220019 | orchestrator | Wednesday 14 May 2025 02:25:03 +0000 (0:00:00.732) 0:00:11.143 ********* 2025-05-14 02:38:13.220026 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.220033 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.220039 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.220046 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.220053 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.220061 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.220068 | orchestrator | 2025-05-14 02:38:13.220075 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-14 02:38:13.220137 | orchestrator | Wednesday 14 May 2025 02:25:05 +0000 (0:00:01.193) 0:00:12.337 ********* 2025-05-14 02:38:13.220145 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220151 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.220157 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.220162 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.220167 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.220172 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.220178 | orchestrator | 2025-05-14 02:38:13.220183 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-14 02:38:13.220217 | orchestrator | Wednesday 14 May 2025 02:25:06 +0000 (0:00:01.044) 0:00:13.381 ********* 2025-05-14 02:38:13.220224 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.220229 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.220235 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.220241 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.220246 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.220252 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.220258 | orchestrator | 2025-05-14 02:38:13.220276 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-14 02:38:13.220282 | orchestrator | Wednesday 14 May 2025 02:25:07 +0000 (0:00:01.061) 0:00:14.442 ********* 2025-05-14 02:38:13.220288 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:38:13.220301 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:38:13.220307 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:38:13.220313 | orchestrator | 2025-05-14 02:38:13.220327 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-14 02:38:13.220333 | orchestrator | Wednesday 14 May 2025 02:25:08 +0000 (0:00:00.939) 0:00:15.382 ********* 2025-05-14 02:38:13.220339 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.220345 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.220351 | orchestrator | ok: 
[testbed-node-2] 2025-05-14 02:38:13.220357 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.220363 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.220369 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.220374 | orchestrator | 2025-05-14 02:38:13.220380 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-14 02:38:13.220386 | orchestrator | Wednesday 14 May 2025 02:25:09 +0000 (0:00:01.808) 0:00:17.190 ********* 2025-05-14 02:38:13.220392 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:38:13.220398 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:38:13.220404 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:38:13.220409 | orchestrator | 2025-05-14 02:38:13.220414 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-14 02:38:13.220425 | orchestrator | Wednesday 14 May 2025 02:25:12 +0000 (0:00:03.067) 0:00:20.258 ********* 2025-05-14 02:38:13.220430 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:38:13.220435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:38:13.220440 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:38:13.220445 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220450 | orchestrator | 2025-05-14 02:38:13.220455 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-14 02:38:13.220460 | orchestrator | Wednesday 14 May 2025 02:25:13 +0000 (0:00:00.722) 0:00:20.981 ********* 2025-05-14 02:38:13.220467 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-14 02:38:13.220474 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-14 02:38:13.220480 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-14 02:38:13.220485 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220490 | orchestrator | 2025-05-14 02:38:13.220496 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-14 02:38:13.220501 | orchestrator | Wednesday 14 May 2025 02:25:14 +0000 (0:00:00.797) 0:00:21.778 ********* 2025-05-14 02:38:13.220508 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:38:13.220516 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:38:13.220527 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:38:13.220532 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220537 | orchestrator | 2025-05-14 02:38:13.220542 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-14 02:38:13.220552 | orchestrator | Wednesday 14 May 2025 02:25:14 +0000 (0:00:00.147) 0:00:21.926 ********* 2025-05-14 02:38:13.220559 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-14 02:25:10.822758', 'end': '2025-05-14 02:25:11.077372', 'delta': '0:00:00.254614', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-14 02:38:13.220569 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-14 02:25:11.636999', 'end': '2025-05-14 02:25:11.885630', 'delta': '0:00:00.248631', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-14 02:38:13.220575 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-14 02:25:12.427507', 'end': '2025-05-14 02:25:12.806038', 'delta': '0:00:00.378531', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-14 02:38:13.220580 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220603 | orchestrator | 2025-05-14 02:38:13.220609 | orchestrator | TASK [ceph-facts : 
set_fact _container_exec_cmd] ******************************* 2025-05-14 02:38:13.220614 | orchestrator | Wednesday 14 May 2025 02:25:14 +0000 (0:00:00.207) 0:00:22.134 ********* 2025-05-14 02:38:13.220619 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.220624 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.220629 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.220634 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.220639 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.220644 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.220649 | orchestrator | 2025-05-14 02:38:13.220654 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-14 02:38:13.220664 | orchestrator | Wednesday 14 May 2025 02:25:16 +0000 (0:00:01.937) 0:00:24.071 ********* 2025-05-14 02:38:13.220669 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.220674 | orchestrator | 2025-05-14 02:38:13.220679 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-14 02:38:13.220683 | orchestrator | Wednesday 14 May 2025 02:25:17 +0000 (0:00:00.693) 0:00:24.765 ********* 2025-05-14 02:38:13.220688 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220694 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.220698 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.220703 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.220708 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.220713 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.220718 | orchestrator | 2025-05-14 02:38:13.220723 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-14 02:38:13.220728 | orchestrator | Wednesday 14 May 2025 02:25:18 +0000 (0:00:00.820) 0:00:25.586 ********* 2025-05-14 02:38:13.220733 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220738 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.220743 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.220747 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.220752 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.220757 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.220762 | orchestrator | 2025-05-14 02:38:13.220767 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 02:38:13.220772 | orchestrator | Wednesday 14 May 2025 02:25:20 +0000 (0:00:02.402) 0:00:27.988 ********* 2025-05-14 02:38:13.220777 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220782 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.220787 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.220792 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.220797 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.220801 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.220806 | orchestrator | 2025-05-14 02:38:13.220811 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-14 02:38:13.220817 | orchestrator | Wednesday 14 May 2025 02:25:21 +0000 (0:00:00.884) 0:00:28.873 ********* 2025-05-14 02:38:13.220825 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220830 | orchestrator | 2025-05-14 02:38:13.220835 | orchestrator | TASK [ceph-facts : generate cluster fsid] 
************************************** 2025-05-14 02:38:13.220840 | orchestrator | Wednesday 14 May 2025 02:25:22 +0000 (0:00:00.489) 0:00:29.362 ********* 2025-05-14 02:38:13.220845 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220850 | orchestrator | 2025-05-14 02:38:13.220855 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 02:38:13.220860 | orchestrator | Wednesday 14 May 2025 02:25:22 +0000 (0:00:00.338) 0:00:29.701 ********* 2025-05-14 02:38:13.220865 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220870 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.220875 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.220880 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.220885 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.220890 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.220894 | orchestrator | 2025-05-14 02:38:13.220899 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-14 02:38:13.220904 | orchestrator | Wednesday 14 May 2025 02:25:23 +0000 (0:00:01.517) 0:00:31.219 ********* 2025-05-14 02:38:13.220909 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220914 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.220919 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.220924 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.220929 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.220934 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.220939 | orchestrator | 2025-05-14 02:38:13.220950 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-14 02:38:13.220955 | orchestrator | Wednesday 14 May 2025 02:25:25 +0000 (0:00:01.544) 0:00:32.763 ********* 2025-05-14 02:38:13.220960 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.220965 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.220970 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.220978 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.220983 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.220988 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.220993 | orchestrator | 2025-05-14 02:38:13.220998 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-14 02:38:13.221003 | orchestrator | Wednesday 14 May 2025 02:25:26 +0000 (0:00:01.282) 0:00:34.045 ********* 2025-05-14 02:38:13.221008 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.221013 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.221060 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.221066 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.221071 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.221076 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.221081 | orchestrator | 2025-05-14 02:38:13.221086 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-14 02:38:13.221091 | orchestrator | Wednesday 14 May 2025 02:25:28 +0000 (0:00:01.301) 0:00:35.346 ********* 2025-05-14 02:38:13.221096 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.221101 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.221106 | orchestrator | 
skipping: [testbed-node-2] 2025-05-14 02:38:13.221111 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.221116 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.221121 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.221126 | orchestrator | 2025-05-14 02:38:13.221131 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-14 02:38:13.221136 | orchestrator | Wednesday 14 May 2025 02:25:28 +0000 (0:00:00.720) 0:00:36.067 ********* 2025-05-14 02:38:13.221141 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.221146 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.221150 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.221155 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.221160 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.221165 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.221170 | orchestrator | 2025-05-14 02:38:13.221175 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-14 02:38:13.221180 | orchestrator | Wednesday 14 May 2025 02:25:29 +0000 (0:00:01.066) 0:00:37.134 ********* 2025-05-14 02:38:13.221185 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.221190 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.221195 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.221200 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.221205 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.221210 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.221215 | orchestrator | 2025-05-14 02:38:13.221220 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-14 02:38:13.221225 | orchestrator | Wednesday 14 May 2025 02:25:30 +0000 (0:00:00.863) 0:00:37.998 ********* 2025-05-14 02:38:13.221231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749', 'scsi-SQEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part1', 'scsi-SQEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part14', 'scsi-SQEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part15', 'scsi-SQEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part16', 'scsi-SQEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-42-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221313 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.221321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
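The long "set_fact devices generate device list when osd_auto_discovery" loop above walks every block device reported in the Ansible facts and skips loop devices, removable media and disks that are already partitioned or in use. A rough Python rendering of that filtering; the exact skip conditions are assumptions for illustration, not a copy of the ceph-ansible logic:

def candidate_osd_devices(devices):
    """Return empty, non-removable, non-virtual disks from ansible_facts['devices']."""
    selected = []
    for name, info in devices.items():
        if name.startswith(("loop", "sr")):   # loop devices and CD-ROM drives
            continue
        if info.get("removable") == "1":      # removable media
            continue
        if info.get("partitions"):            # already partitioned, e.g. the sda root disk
            continue
        if info.get("holders"):               # already claimed by LVM or md
            continue
        selected.append(f"/dev/{name}")
    return selected

# With the facts shown in this log (loop0..loop7, a partitioned sda and sr0),
# every device is filtered out, which matches the skipped loop iterations above.
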
2025-05-14 02:38:13.221331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74bbaa4f-bef6-4c72-86a9-a51ae58fe1ae', 'scsi-SQEMU_QEMU_HARDDISK_74bbaa4f-bef6-4c72-86a9-a51ae58fe1ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74bbaa4f-bef6-4c72-86a9-a51ae58fe1ae-part1', 'scsi-SQEMU_QEMU_HARDDISK_74bbaa4f-bef6-4c72-86a9-a51ae58fe1ae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74bbaa4f-bef6-4c72-86a9-a51ae58fe1ae-part14', 'scsi-SQEMU_QEMU_HARDDISK_74bbaa4f-bef6-4c72-86a9-a51ae58fe1ae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74bbaa4f-bef6-4c72-86a9-a51ae58fe1ae-part15', 'scsi-SQEMU_QEMU_HARDDISK_74bbaa4f-bef6-4c72-86a9-a51ae58fe1ae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_74bbaa4f-bef6-4c72-86a9-a51ae58fe1ae-part16', 'scsi-SQEMU_QEMU_HARDDISK_74bbaa4f-bef6-4c72-86a9-a51ae58fe1ae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-42-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221407 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221413 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.221418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cb58592c--122c--52e3--870d--c9748cfaa53d-osd--block--cb58592c--122c--52e3--870d--c9748cfaa53d', 'dm-uuid-LVM-tHFsPa1Zsw1yoNENjzY3utZu0eTPYbS4dJAmJocSDNoO6F4fb16ndXk314plfCdR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b14ae20f--13fb--53c3--906d--34f9f68040ad-osd--block--b14ae20f--13fb--53c3--906d--34f9f68040ad', 'dm-uuid-LVM-bOkJVLp7SZmvorSx9c6SShTcOJL7GkIA1I1O9R0OzDiXPssI7LdzI7YYonqs4jBz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 
None, 'virtual': 1}})  2025-05-14 02:38:13.221458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13 | INFO  | Task 4f18501c-1ddd-4ef0-a495-7403d07898f9 is in state SUCCESS  2025-05-14 02:38:13.221700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ab8a1a8-ea98-4f35-a272-b18ca435ba8e', 'scsi-SQEMU_QEMU_HARDDISK_7ab8a1a8-ea98-4f35-a272-b18ca435ba8e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ab8a1a8-ea98-4f35-a272-b18ca435ba8e-part1', 'scsi-SQEMU_QEMU_HARDDISK_7ab8a1a8-ea98-4f35-a272-b18ca435ba8e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ab8a1a8-ea98-4f35-a272-b18ca435ba8e-part14', 'scsi-SQEMU_QEMU_HARDDISK_7ab8a1a8-ea98-4f35-a272-b18ca435ba8e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ab8a1a8-ea98-4f35-a272-b18ca435ba8e-part15', 'scsi-SQEMU_QEMU_HARDDISK_7ab8a1a8-ea98-4f35-a272-b18ca435ba8e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7ab8a1a8-ea98-4f35-a272-b18ca435ba8e-part16', 'scsi-SQEMU_QEMU_HARDDISK_7ab8a1a8-ea98-4f35-a272-b18ca435ba8e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors':
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-42-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221751 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.221760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221766 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314', 'scsi-SQEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--cb58592c--122c--52e3--870d--c9748cfaa53d-osd--block--cb58592c--122c--52e3--870d--c9748cfaa53d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ITJGCf-nQY9-xUaX-udKR-CHjO-rooj-hoHY6i', 'scsi-0QEMU_QEMU_HARDDISK_1098e660-21c4-40f1-8a57-5405cc8713a2', 'scsi-SQEMU_QEMU_HARDDISK_1098e660-21c4-40f1-8a57-5405cc8713a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b14ae20f--13fb--53c3--906d--34f9f68040ad-osd--block--b14ae20f--13fb--53c3--906d--34f9f68040ad'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LNOzDv-mwbK-977r-ZNn2-BUfK-Ojs2-FTfoOq', 'scsi-0QEMU_QEMU_HARDDISK_41d88fd2-4f90-4be6-b9c2-0d02d8e1d9f7', 'scsi-SQEMU_QEMU_HARDDISK_41d88fd2-4f90-4be6-b9c2-0d02d8e1d9f7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--22852bcc--228b--503b--9f2d--d63325c20b67-osd--block--22852bcc--228b--503b--9f2d--d63325c20b67', 'dm-uuid-LVM-2vpr9dH9gZeY8gSil9erCIBYNaxeCzrZ1IWCpJEufaeYaIMw4MYdPiXqm21TwVYW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fc7bdc9b--bbf6--5512--af7e--0ab125570579-osd--block--fc7bdc9b--bbf6--5512--af7e--0ab125570579', 'dm-uuid-LVM-WUctxpClN6jUduZp73Iv5SahlscRQAQldk84TPtfzN2yibAyIlveTWNy7oqd97jt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37cfb3af-bf99-4b3f-874b-d71467a37a95', 'scsi-SQEMU_QEMU_HARDDISK_37cfb3af-bf99-4b3f-874b-d71467a37a95'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-42-13-00']}, 'model': 
'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d', 'scsi-SQEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--22852bcc--228b--503b--9f2d--d63325c20b67-osd--block--22852bcc--228b--503b--9f2d--d63325c20b67'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V764Hq-8X16-LNUu-Hl8y-SGFt-xULo-iyh3tC', 'scsi-0QEMU_QEMU_HARDDISK_5f54ee85-b545-45a6-a856-bcb5a8b0ac61', 'scsi-SQEMU_QEMU_HARDDISK_5f54ee85-b545-45a6-a856-bcb5a8b0ac61'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221900 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.221906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fc7bdc9b--bbf6--5512--af7e--0ab125570579-osd--block--fc7bdc9b--bbf6--5512--af7e--0ab125570579'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-71E2E4-aKB8-J58C-zoT5-3Xr5-ce13-EO3DMe', 'scsi-0QEMU_QEMU_HARDDISK_7ac274fd-1a92-402b-b855-ca6b0ab20cf2', 'scsi-SQEMU_QEMU_HARDDISK_7ac274fd-1a92-402b-b855-ca6b0ab20cf2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d2bee4e-0e3b-437e-a6d5-c0ab15229884', 'scsi-SQEMU_QEMU_HARDDISK_1d2bee4e-0e3b-437e-a6d5-c0ab15229884'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-42-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.221927 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.221936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4aa0a295--50da--5a6e--9e1c--976797741e16-osd--block--4aa0a295--50da--5a6e--9e1c--976797741e16', 'dm-uuid-LVM-kCdz1cde0pMwsO7F2lzKVC2J1lH2SAPakSAWtUoVyWeUARAfubkYbGmRdDUaBMdb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--19540cc4--3279--5090--817a--02eeffb19a16-osd--block--19540cc4--3279--5090--817a--02eeffb19a16', 'dm-uuid-LVM-1KXQdUGVl8VfBLlgnGCchBpMSZqt1xNdXGspfLy96JIM1P11e7FnyOlDaxnhF5Xr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.221998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:38:13.222007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d', 'scsi-SQEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.222059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4aa0a295--50da--5a6e--9e1c--976797741e16-osd--block--4aa0a295--50da--5a6e--9e1c--976797741e16'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vZEOLk-gz5S-Ejgr-ohmn-qOyq-pndi-D4F9VL', 'scsi-0QEMU_QEMU_HARDDISK_dfedfdfd-f02f-46ee-b152-0d1db465af93', 'scsi-SQEMU_QEMU_HARDDISK_dfedfdfd-f02f-46ee-b152-0d1db465af93'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.222068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--19540cc4--3279--5090--817a--02eeffb19a16-osd--block--19540cc4--3279--5090--817a--02eeffb19a16'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0uiX9e-BgTy-IRkz-qU3A-ZzGp-2nz6-mwR7Nh', 'scsi-0QEMU_QEMU_HARDDISK_b728a659-cffd-44e0-b567-754457aa92dd', 'scsi-SQEMU_QEMU_HARDDISK_b728a659-cffd-44e0-b567-754457aa92dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.222079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0315b34d-7399-4bf5-aad0-c6c82dbe1c9e', 'scsi-SQEMU_QEMU_HARDDISK_0315b34d-7399-4bf5-aad0-c6c82dbe1c9e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.222085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-42-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:38:13.222090 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.222096 | orchestrator | 2025-05-14 02:38:13.222101 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-14 02:38:13.222106 | orchestrator | Wednesday 14 May 2025 02:25:33 +0000 (0:00:02.287) 0:00:40.285 ********* 2025-05-14 02:38:13.222112 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.222117 | orchestrator | 2025-05-14 02:38:13.222122 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-14 02:38:13.222132 | orchestrator | Wednesday 14 May 2025 02:25:33 +0000 (0:00:00.414) 0:00:40.700 ********* 2025-05-14 02:38:13.222137 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.222142 | orchestrator | 2025-05-14 02:38:13.222150 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] 
************************************** 2025-05-14 02:38:13.222156 | orchestrator | Wednesday 14 May 2025 02:25:33 +0000 (0:00:00.149) 0:00:40.850 ********* 2025-05-14 02:38:13.222161 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.222167 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.222172 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.222177 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.222182 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.222187 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.222193 | orchestrator | 2025-05-14 02:38:13.222198 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-14 02:38:13.222203 | orchestrator | Wednesday 14 May 2025 02:25:34 +0000 (0:00:00.986) 0:00:41.836 ********* 2025-05-14 02:38:13.222208 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.222214 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.222219 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.222224 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.222230 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.222235 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.222240 | orchestrator | 2025-05-14 02:38:13.222245 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-14 02:38:13.222250 | orchestrator | Wednesday 14 May 2025 02:25:36 +0000 (0:00:01.859) 0:00:43.696 ********* 2025-05-14 02:38:13.222256 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.222261 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.222266 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.222271 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.222276 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.222282 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.222287 | orchestrator | 2025-05-14 02:38:13.222292 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 02:38:13.222297 | orchestrator | Wednesday 14 May 2025 02:25:37 +0000 (0:00:00.727) 0:00:44.423 ********* 2025-05-14 02:38:13.222303 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.222308 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.222314 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.222321 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.222327 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.222333 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.222339 | orchestrator | 2025-05-14 02:38:13.222345 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 02:38:13.222351 | orchestrator | Wednesday 14 May 2025 02:25:38 +0000 (0:00:00.879) 0:00:45.303 ********* 2025-05-14 02:38:13.222357 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.222413 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.222419 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.222425 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.222431 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.222436 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.222465 | orchestrator | 2025-05-14 02:38:13.222471 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 02:38:13.222477 
| orchestrator | Wednesday 14 May 2025 02:25:38 +0000 (0:00:00.923) 0:00:46.226 ********* 2025-05-14 02:38:13.222483 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.222489 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.222495 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.222500 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.222506 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.222512 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.222518 | orchestrator | 2025-05-14 02:38:13.222524 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 02:38:13.222533 | orchestrator | Wednesday 14 May 2025 02:25:40 +0000 (0:00:01.398) 0:00:47.625 ********* 2025-05-14 02:38:13.222539 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.222545 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.222551 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.222556 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.222562 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.222568 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.222574 | orchestrator | 2025-05-14 02:38:13.222580 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-14 02:38:13.222637 | orchestrator | Wednesday 14 May 2025 02:25:41 +0000 (0:00:01.107) 0:00:48.732 ********* 2025-05-14 02:38:13.222648 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:38:13.222831 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:38:13.222840 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 02:38:13.222845 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 02:38:13.222850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:38:13.222855 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 02:38:13.222860 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.222865 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 02:38:13.222870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:38:13.222875 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 02:38:13.222880 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.222885 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 02:38:13.222890 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.222895 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:38:13.222900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:38:13.222905 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:38:13.222910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:38:13.222915 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.222920 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:38:13.222924 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:38:13.222929 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.222939 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:38:13.222945 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:38:13.222950 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.222955 | orchestrator | 2025-05-14 02:38:13.222960 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-14 02:38:13.222965 | orchestrator | Wednesday 14 May 2025 02:25:44 +0000 (0:00:02.948) 0:00:51.681 ********* 2025-05-14 02:38:13.222970 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:38:13.222975 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 02:38:13.222980 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:38:13.222985 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 02:38:13.222990 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 02:38:13.222994 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:38:13.222999 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:38:13.223004 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.223009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:38:13.223014 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:38:13.223019 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 02:38:13.223030 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.223035 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 02:38:13.223040 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:38:13.223044 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:38:13.223049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:38:13.223054 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 02:38:13.223059 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.223064 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:38:13.223068 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.223073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:38:13.223078 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.223083 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:38:13.223088 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.223093 | orchestrator | 2025-05-14 02:38:13.223098 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-14 02:38:13.223104 | orchestrator | Wednesday 14 May 2025 02:25:46 +0000 (0:00:01.831) 0:00:53.512 ********* 2025-05-14 02:38:13.223109 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:38:13.223115 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-14 02:38:13.223121 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 02:38:13.223126 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-14 02:38:13.223132 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-05-14 02:38:13.223163 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 02:38:13.223169 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-14 02:38:13.223175 | 
orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-14 02:38:13.223181 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-14 02:38:13.223187 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-14 02:38:13.223193 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-14 02:38:13.223199 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-14 02:38:13.223205 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-14 02:38:13.223211 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-14 02:38:13.223217 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-14 02:38:13.223222 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-14 02:38:13.223228 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-14 02:38:13.223234 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-14 02:38:13.223240 | orchestrator | 2025-05-14 02:38:13.223246 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-14 02:38:13.223257 | orchestrator | Wednesday 14 May 2025 02:25:51 +0000 (0:00:05.418) 0:00:58.931 ********* 2025-05-14 02:38:13.223263 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:38:13.223269 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:38:13.223275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:38:13.223281 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 02:38:13.223287 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 02:38:13.223292 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 02:38:13.223298 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.223304 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 02:38:13.223310 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 02:38:13.223316 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 02:38:13.223322 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.223334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:38:13.223340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:38:13.223346 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.223352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:38:13.223358 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:38:13.223364 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.223370 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:38:13.223379 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:38:13.223385 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.223391 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:38:13.223397 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:38:13.223403 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:38:13.223409 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.223415 | orchestrator | 2025-05-14 02:38:13.223421 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses 
to monitor_interface - ipv6] **** 2025-05-14 02:38:13.223427 | orchestrator | Wednesday 14 May 2025 02:25:53 +0000 (0:00:01.991) 0:01:00.923 ********* 2025-05-14 02:38:13.223433 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:38:13.223438 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:38:13.223444 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:38:13.223451 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.223458 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-14 02:38:13.223465 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-14 02:38:13.223471 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-14 02:38:13.223478 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-14 02:38:13.223485 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-14 02:38:13.223492 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.223499 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-14 02:38:13.223507 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:38:13.223518 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:38:13.223527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:38:13.223652 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.223665 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.223675 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:38:13.223685 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:38:13.223695 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:38:13.223704 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.223713 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:38:13.223722 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:38:13.223727 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:38:13.223733 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.223739 | orchestrator | 2025-05-14 02:38:13.223745 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-14 02:38:13.223750 | orchestrator | Wednesday 14 May 2025 02:25:55 +0000 (0:00:01.372) 0:01:02.295 ********* 2025-05-14 02:38:13.223757 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-14 02:38:13.223763 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:38:13.223769 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:38:13.223781 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:38:13.223787 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-05-14 02:38:13.223792 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:38:13.223798 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 
02:38:13.223804 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:38:13.223810 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:38:13.223821 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:38:13.223827 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:38:13.223833 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-05-14 02:38:13.223838 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.223844 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:38:13.223850 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:38:13.223855 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:38:13.223861 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.223867 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:38:13.223872 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:38:13.223878 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:38:13.223884 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.223889 | orchestrator | 2025-05-14 02:38:13.223895 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-14 02:38:13.223901 | orchestrator | Wednesday 14 May 2025 02:25:56 +0000 (0:00:01.392) 0:01:03.688 ********* 2025-05-14 02:38:13.223907 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.223913 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.223919 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.223924 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.223930 | orchestrator | 2025-05-14 02:38:13.223936 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:38:13.223942 | orchestrator | Wednesday 14 May 2025 02:25:57 +0000 (0:00:01.219) 0:01:04.908 ********* 2025-05-14 02:38:13.223948 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.223954 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.223959 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.223965 | orchestrator | 2025-05-14 02:38:13.223971 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:38:13.223976 | orchestrator | Wednesday 14 May 2025 02:25:58 +0000 (0:00:00.622) 0:01:05.531 ********* 2025-05-14 02:38:13.223982 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.223988 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.223994 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.223999 | orchestrator | 2025-05-14 02:38:13.224005 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 
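For orientation: in ceph-ansible the ceph-facts role fills _monitor_addresses and _radosgw_address from one of three sources (an explicit address, an address block in CIDR form, or an interface name), and the unused branches show up in this run as the skipped "... - ipv4" / "... - ipv6" tasks. A minimal, non-authoritative sketch of host variables that would match the pattern logged here; all values are illustrative assumptions, not read from this deployment:

  # host_vars/testbed-node-0.yml -- illustrative sketch, values assumed
  monitor_address: 192.168.16.10      # explicit address, so the monitor_interface
                                      # ipv4/ipv6 branches are skipped
  # host_vars/testbed-node-3.yml -- illustrative sketch, values assumed
  radosgw_address: 192.168.16.13      # explicit address, so the radosgw_address_block
                                      # and radosgw_interface branches are skipped

With explicit addresses defined, _current_monitor_address then resolves per monitor host to its own (name, addr) pair, e.g. {'name': 'testbed-node-0', 'addr': '192.168.16.10'}, as shown in the ok: items logged for this task.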
2025-05-14 02:38:13.224011 | orchestrator | Wednesday 14 May 2025 02:25:59 +0000 (0:00:00.783) 0:01:06.315 ********* 2025-05-14 02:38:13.224016 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.224022 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.224032 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.224038 | orchestrator | 2025-05-14 02:38:13.224043 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:38:13.224049 | orchestrator | Wednesday 14 May 2025 02:25:59 +0000 (0:00:00.892) 0:01:07.207 ********* 2025-05-14 02:38:13.224055 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.224061 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.224067 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.224082 | orchestrator | 2025-05-14 02:38:13.224088 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:38:13.224094 | orchestrator | Wednesday 14 May 2025 02:26:00 +0000 (0:00:00.838) 0:01:08.046 ********* 2025-05-14 02:38:13.224103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.224113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.224122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.224141 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.224151 | orchestrator | 2025-05-14 02:38:13.224160 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:38:13.224168 | orchestrator | Wednesday 14 May 2025 02:26:01 +0000 (0:00:00.740) 0:01:08.786 ********* 2025-05-14 02:38:13.224178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.224188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.224198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.224208 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.224237 | orchestrator | 2025-05-14 02:38:13.224244 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:38:13.224250 | orchestrator | Wednesday 14 May 2025 02:26:02 +0000 (0:00:00.775) 0:01:09.562 ********* 2025-05-14 02:38:13.224255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.224261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.224267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.224272 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.224278 | orchestrator | 2025-05-14 02:38:13.224300 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.224305 | orchestrator | Wednesday 14 May 2025 02:26:03 +0000 (0:00:01.534) 0:01:11.096 ********* 2025-05-14 02:38:13.224311 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.224380 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.224398 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.224404 | orchestrator | 2025-05-14 02:38:13.224410 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:38:13.224422 | orchestrator | Wednesday 14 May 2025 02:26:04 +0000 (0:00:00.733) 0:01:11.830 ********* 2025-05-14 02:38:13.224428 | 
orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-14 02:38:13.224434 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-14 02:38:13.224440 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-14 02:38:13.224445 | orchestrator | 2025-05-14 02:38:13.224451 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:38:13.224457 | orchestrator | Wednesday 14 May 2025 02:26:06 +0000 (0:00:01.771) 0:01:13.601 ********* 2025-05-14 02:38:13.224463 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.224469 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.224474 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.224480 | orchestrator | 2025-05-14 02:38:13.224486 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.224492 | orchestrator | Wednesday 14 May 2025 02:26:07 +0000 (0:00:00.802) 0:01:14.403 ********* 2025-05-14 02:38:13.224497 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.224503 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.224509 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.224520 | orchestrator | 2025-05-14 02:38:13.224526 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:38:13.224531 | orchestrator | Wednesday 14 May 2025 02:26:08 +0000 (0:00:01.313) 0:01:15.717 ********* 2025-05-14 02:38:13.224537 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:38:13.224543 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.224548 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:38:13.224554 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.224560 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:38:13.224565 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.224571 | orchestrator | 2025-05-14 02:38:13.224579 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:38:13.224629 | orchestrator | Wednesday 14 May 2025 02:26:09 +0000 (0:00:01.340) 0:01:17.057 ********* 2025-05-14 02:38:13.224638 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.224644 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.224650 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.224656 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.224661 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.224667 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.224672 | orchestrator | 2025-05-14 02:38:13.224678 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:38:13.224684 | orchestrator | Wednesday 14 May 2025 02:26:11 +0000 (0:00:01.348) 0:01:18.405 ********* 2025-05-14 02:38:13.224689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.224695 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:38:13.224701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  
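The rgw_instances fact being set here enumerates the RADOS gateway instances per RGW node. A non-authoritative sketch of the per-host fact produced by "set_fact rgw_instances without rgw multisite", assuming a single instance per host (radosgw_num_instances: 1 is an assumption; address and port follow the item values logged for these nodes):

  # Shape of the rgw_instances fact on testbed-node-3 -- illustrative sketch
  rgw_instances:
    - instance_name: rgw0                 # instance index 0 -> name rgw0
      radosgw_address: 192.168.16.13      # 192.168.16.14 / .15 on testbed-node-4 / -5
      radosgw_frontend_port: 8081

The multisite variants of these tasks (rgw_instances with rgw multisite, rgw_instances_host, rgw_instances_all) are skipped in this run, consistent with RGW multisite being disabled.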
2025-05-14 02:38:13.224706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.224712 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.224717 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:38:13.224722 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:38:13.224728 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:38:13.224733 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:38:13.224739 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.224745 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:38:13.224750 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.224756 | orchestrator | 2025-05-14 02:38:13.224762 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-14 02:38:13.224768 | orchestrator | Wednesday 14 May 2025 02:26:12 +0000 (0:00:01.457) 0:01:19.863 ********* 2025-05-14 02:38:13.224773 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.224779 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.224784 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.224790 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.224795 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.224801 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.224806 | orchestrator | 2025-05-14 02:38:13.224812 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-14 02:38:13.224818 | orchestrator | Wednesday 14 May 2025 02:26:13 +0000 (0:00:01.096) 0:01:20.960 ********* 2025-05-14 02:38:13.224823 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:38:13.224829 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:38:13.224835 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:38:13.224845 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-14 02:38:13.224851 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 02:38:13.224856 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 02:38:13.224862 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 02:38:13.224867 | orchestrator | 2025-05-14 02:38:13.224873 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-14 02:38:13.224879 | orchestrator | Wednesday 14 May 2025 02:26:14 +0000 (0:00:01.055) 0:01:22.016 ********* 2025-05-14 02:38:13.224884 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:38:13.224894 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:38:13.224899 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:38:13.224905 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-14 02:38:13.224910 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 02:38:13.224916 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 02:38:13.224921 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 02:38:13.224927 | orchestrator | 2025-05-14 02:38:13.224933 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:38:13.224938 | orchestrator | Wednesday 14 May 2025 02:26:17 +0000 (0:00:02.439) 0:01:24.455 ********* 2025-05-14 02:38:13.224944 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.224951 | orchestrator | 2025-05-14 02:38:13.224957 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 02:38:13.224962 | orchestrator | Wednesday 14 May 2025 02:26:19 +0000 (0:00:02.225) 0:01:26.680 ********* 2025-05-14 02:38:13.224968 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.224974 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.224979 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.224988 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.224994 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.225000 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.225005 | orchestrator | 2025-05-14 02:38:13.225011 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:38:13.225017 | orchestrator | Wednesday 14 May 2025 02:26:20 +0000 (0:00:01.411) 0:01:28.092 ********* 2025-05-14 02:38:13.225022 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225028 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225034 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225039 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.225045 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.225051 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.225056 | orchestrator | 2025-05-14 02:38:13.225062 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:38:13.225068 | orchestrator | Wednesday 14 May 2025 02:26:22 +0000 (0:00:01.808) 0:01:29.900 ********* 2025-05-14 02:38:13.225073 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225079 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225084 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225090 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.225096 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.225101 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.225107 | orchestrator | 2025-05-14 02:38:13.225113 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:38:13.225134 | orchestrator | Wednesday 14 May 2025 02:26:24 +0000 (0:00:01.698) 0:01:31.599 ********* 2025-05-14 02:38:13.225139 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225145 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225151 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225156 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.225162 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.225168 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.225173 | orchestrator | 2025-05-14 02:38:13.225179 | 
orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:38:13.225185 | orchestrator | Wednesday 14 May 2025 02:26:25 +0000 (0:00:01.173) 0:01:32.773 ********* 2025-05-14 02:38:13.225190 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.225196 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.225202 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.225207 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.225213 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.225219 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.225224 | orchestrator | 2025-05-14 02:38:13.225230 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:38:13.225236 | orchestrator | Wednesday 14 May 2025 02:26:27 +0000 (0:00:01.861) 0:01:34.635 ********* 2025-05-14 02:38:13.225241 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225247 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225253 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225258 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.225264 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.225270 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.225275 | orchestrator | 2025-05-14 02:38:13.225281 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:38:13.225287 | orchestrator | Wednesday 14 May 2025 02:26:28 +0000 (0:00:00.729) 0:01:35.364 ********* 2025-05-14 02:38:13.225292 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225298 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225303 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225309 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.225315 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.225320 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.225326 | orchestrator | 2025-05-14 02:38:13.225332 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:38:13.225337 | orchestrator | Wednesday 14 May 2025 02:26:29 +0000 (0:00:00.909) 0:01:36.274 ********* 2025-05-14 02:38:13.225343 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225349 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225355 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225360 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.225366 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.225371 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.225377 | orchestrator | 2025-05-14 02:38:13.225383 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:38:13.225388 | orchestrator | Wednesday 14 May 2025 02:26:29 +0000 (0:00:00.597) 0:01:36.871 ********* 2025-05-14 02:38:13.225398 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225403 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225409 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225415 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.225420 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.225426 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.225431 | orchestrator | 2025-05-14 02:38:13.225437 | orchestrator 
| TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:38:13.225443 | orchestrator | Wednesday 14 May 2025 02:26:30 +0000 (0:00:00.908) 0:01:37.780 ********* 2025-05-14 02:38:13.225448 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225454 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225463 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225469 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.225475 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.225480 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.225486 | orchestrator | 2025-05-14 02:38:13.225491 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:38:13.225497 | orchestrator | Wednesday 14 May 2025 02:26:31 +0000 (0:00:00.643) 0:01:38.423 ********* 2025-05-14 02:38:13.225503 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.225509 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.225514 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.225520 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.225526 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.225531 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.225537 | orchestrator | 2025-05-14 02:38:13.225543 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:38:13.225549 | orchestrator | Wednesday 14 May 2025 02:26:32 +0000 (0:00:01.323) 0:01:39.747 ********* 2025-05-14 02:38:13.225554 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225563 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225569 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225574 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.225580 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.225601 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.225611 | orchestrator | 2025-05-14 02:38:13.225621 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 02:38:13.225632 | orchestrator | Wednesday 14 May 2025 02:26:33 +0000 (0:00:00.704) 0:01:40.452 ********* 2025-05-14 02:38:13.225641 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.225651 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.225658 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.225664 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.225669 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.225675 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.225680 | orchestrator | 2025-05-14 02:38:13.225686 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:38:13.225691 | orchestrator | Wednesday 14 May 2025 02:26:34 +0000 (0:00:00.930) 0:01:41.382 ********* 2025-05-14 02:38:13.225697 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225703 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225708 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225714 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.225720 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.225725 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.225731 | orchestrator | 2025-05-14 02:38:13.225737 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] 
****************************** 2025-05-14 02:38:13.225742 | orchestrator | Wednesday 14 May 2025 02:26:34 +0000 (0:00:00.692) 0:01:42.074 ********* 2025-05-14 02:38:13.225748 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225753 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225759 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225764 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.225770 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.225776 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.225781 | orchestrator | 2025-05-14 02:38:13.225787 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:38:13.225793 | orchestrator | Wednesday 14 May 2025 02:26:35 +0000 (0:00:01.099) 0:01:43.173 ********* 2025-05-14 02:38:13.225798 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225804 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225810 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225815 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.225821 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.225827 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.225837 | orchestrator | 2025-05-14 02:38:13.225843 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:38:13.225849 | orchestrator | Wednesday 14 May 2025 02:26:36 +0000 (0:00:00.559) 0:01:43.733 ********* 2025-05-14 02:38:13.225854 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225860 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225865 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225871 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.225877 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.225882 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.225888 | orchestrator | 2025-05-14 02:38:13.225894 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:38:13.225899 | orchestrator | Wednesday 14 May 2025 02:26:37 +0000 (0:00:00.674) 0:01:44.408 ********* 2025-05-14 02:38:13.225905 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.225910 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.225916 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.225922 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.225927 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.225933 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.225939 | orchestrator | 2025-05-14 02:38:13.225944 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:38:13.225950 | orchestrator | Wednesday 14 May 2025 02:26:37 +0000 (0:00:00.537) 0:01:44.946 ********* 2025-05-14 02:38:13.225956 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.225961 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.225967 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.225973 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.225978 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.225984 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.225989 | orchestrator | 2025-05-14 02:38:13.225997 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 
02:38:13.226011 | orchestrator | Wednesday 14 May 2025 02:26:38 +0000 (0:00:00.670) 0:01:45.616 ********* 2025-05-14 02:38:13.226075 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.226086 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.226097 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.226106 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.226116 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.226122 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.226128 | orchestrator | 2025-05-14 02:38:13.226134 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:38:13.226140 | orchestrator | Wednesday 14 May 2025 02:26:38 +0000 (0:00:00.578) 0:01:46.195 ********* 2025-05-14 02:38:13.226146 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226152 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226157 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226163 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226168 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226174 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.226180 | orchestrator | 2025-05-14 02:38:13.226185 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:38:13.226191 | orchestrator | Wednesday 14 May 2025 02:26:39 +0000 (0:00:00.739) 0:01:46.934 ********* 2025-05-14 02:38:13.226197 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226203 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226209 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226215 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226221 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226227 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.226233 | orchestrator | 2025-05-14 02:38:13.226239 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:38:13.226249 | orchestrator | Wednesday 14 May 2025 02:26:40 +0000 (0:00:00.635) 0:01:47.570 ********* 2025-05-14 02:38:13.226261 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226267 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226273 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226279 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226285 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226291 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.226297 | orchestrator | 2025-05-14 02:38:13.226303 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:38:13.226309 | orchestrator | Wednesday 14 May 2025 02:26:41 +0000 (0:00:00.960) 0:01:48.531 ********* 2025-05-14 02:38:13.226315 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226321 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226327 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226333 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226339 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226345 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.226351 | orchestrator | 2025-05-14 02:38:13.226357 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:38:13.226363 | orchestrator 
| Wednesday 14 May 2025 02:26:41 +0000 (0:00:00.562) 0:01:49.093 ********* 2025-05-14 02:38:13.226369 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226375 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226381 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226387 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226393 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226399 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.226405 | orchestrator | 2025-05-14 02:38:13.226411 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:38:13.226417 | orchestrator | Wednesday 14 May 2025 02:26:42 +0000 (0:00:00.753) 0:01:49.847 ********* 2025-05-14 02:38:13.226423 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226429 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226435 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226441 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226447 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226453 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.226459 | orchestrator | 2025-05-14 02:38:13.226465 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:38:13.226471 | orchestrator | Wednesday 14 May 2025 02:26:43 +0000 (0:00:00.729) 0:01:50.576 ********* 2025-05-14 02:38:13.226477 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226483 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226489 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226495 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226501 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226507 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.226513 | orchestrator | 2025-05-14 02:38:13.226519 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:38:13.226525 | orchestrator | Wednesday 14 May 2025 02:26:44 +0000 (0:00:01.006) 0:01:51.583 ********* 2025-05-14 02:38:13.226531 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226537 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226543 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226550 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226556 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226561 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.226568 | orchestrator | 2025-05-14 02:38:13.226574 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:38:13.226580 | orchestrator | Wednesday 14 May 2025 02:26:44 +0000 (0:00:00.586) 0:01:52.170 ********* 2025-05-14 02:38:13.226601 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226611 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226622 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226628 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226634 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226640 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.226646 | orchestrator | 2025-05-14 02:38:13.226652 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 
'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:38:13.226658 | orchestrator | Wednesday 14 May 2025 02:26:45 +0000 (0:00:00.906) 0:01:53.076 ********* 2025-05-14 02:38:13.226664 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226671 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226700 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226706 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226712 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226718 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.226724 | orchestrator | 2025-05-14 02:38:13.226730 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:38:13.226736 | orchestrator | Wednesday 14 May 2025 02:26:46 +0000 (0:00:00.644) 0:01:53.720 ********* 2025-05-14 02:38:13.226742 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226748 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226754 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226760 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226766 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226772 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.226778 | orchestrator | 2025-05-14 02:38:13.226784 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:38:13.226790 | orchestrator | Wednesday 14 May 2025 02:26:47 +0000 (0:00:00.961) 0:01:54.682 ********* 2025-05-14 02:38:13.226796 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226803 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226809 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226815 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226821 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226826 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.226832 | orchestrator | 2025-05-14 02:38:13.226838 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:38:13.226844 | orchestrator | Wednesday 14 May 2025 02:26:48 +0000 (0:00:00.753) 0:01:55.436 ********* 2025-05-14 02:38:13.226850 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:38:13.226860 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:38:13.226866 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.226872 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:38:13.226878 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:38:13.226884 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.226890 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:38:13.226896 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:38:13.226902 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.226907 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:38:13.226913 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:38:13.226919 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:38:13.226950 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:38:13.226959 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.226970 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.226993 | 
orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:38:13.227023 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:38:13.227068 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.227079 | orchestrator | 2025-05-14 02:38:13.227089 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:38:13.227099 | orchestrator | Wednesday 14 May 2025 02:26:49 +0000 (0:00:00.995) 0:01:56.431 ********* 2025-05-14 02:38:13.227116 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 02:38:13.227127 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 02:38:13.227137 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.227146 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 02:38:13.227157 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 02:38:13.227167 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.227177 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 02:38:13.227188 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 02:38:13.227198 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.227208 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 02:38:13.227217 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 02:38:13.227228 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.227238 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 02:38:13.227248 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 02:38:13.227258 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.227279 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 02:38:13.227289 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 02:38:13.227298 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.227308 | orchestrator | 2025-05-14 02:38:13.227319 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:38:13.227330 | orchestrator | Wednesday 14 May 2025 02:26:50 +0000 (0:00:00.922) 0:01:57.354 ********* 2025-05-14 02:38:13.227341 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.227351 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.227361 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.227371 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.227380 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.227386 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.227392 | orchestrator | 2025-05-14 02:38:13.227398 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:38:13.227404 | orchestrator | Wednesday 14 May 2025 02:26:51 +0000 (0:00:01.120) 0:01:58.474 ********* 2025-05-14 02:38:13.227410 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.227416 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.227422 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.227428 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.227434 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.227440 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
02:38:13.227446 | orchestrator | 2025-05-14 02:38:13.227452 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:38:13.227465 | orchestrator | Wednesday 14 May 2025 02:26:51 +0000 (0:00:00.750) 0:01:59.225 ********* 2025-05-14 02:38:13.227471 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.227478 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.227483 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.227489 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.227495 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.227501 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.227507 | orchestrator | 2025-05-14 02:38:13.227513 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:38:13.227520 | orchestrator | Wednesday 14 May 2025 02:26:53 +0000 (0:00:01.098) 0:02:00.323 ********* 2025-05-14 02:38:13.227526 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.227532 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.227538 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.227554 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.227561 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.227566 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.227572 | orchestrator | 2025-05-14 02:38:13.227578 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:38:13.227584 | orchestrator | Wednesday 14 May 2025 02:26:53 +0000 (0:00:00.709) 0:02:01.033 ********* 2025-05-14 02:38:13.227645 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.227656 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.227665 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.227675 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.227686 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.227696 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.227706 | orchestrator | 2025-05-14 02:38:13.227716 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:38:13.227732 | orchestrator | Wednesday 14 May 2025 02:26:54 +0000 (0:00:00.838) 0:02:01.871 ********* 2025-05-14 02:38:13.227743 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.227753 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.227763 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.227773 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.227782 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.227792 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.227801 | orchestrator | 2025-05-14 02:38:13.227810 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:38:13.227820 | orchestrator | Wednesday 14 May 2025 02:26:55 +0000 (0:00:00.575) 0:02:02.446 ********* 2025-05-14 02:38:13.227829 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.227839 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.227849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.227858 | orchestrator | skipping: [testbed-node-0] 2025-05-14 
02:38:13.227868 | orchestrator | 2025-05-14 02:38:13.227877 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:38:13.227887 | orchestrator | Wednesday 14 May 2025 02:26:55 +0000 (0:00:00.721) 0:02:03.168 ********* 2025-05-14 02:38:13.227896 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.227906 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.227915 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.227925 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.227934 | orchestrator | 2025-05-14 02:38:13.227944 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:38:13.227953 | orchestrator | Wednesday 14 May 2025 02:26:56 +0000 (0:00:00.370) 0:02:03.538 ********* 2025-05-14 02:38:13.227963 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.227972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.227982 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.227991 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.228000 | orchestrator | 2025-05-14 02:38:13.228010 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.228019 | orchestrator | Wednesday 14 May 2025 02:26:56 +0000 (0:00:00.369) 0:02:03.908 ********* 2025-05-14 02:38:13.228029 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.228039 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.228048 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.228057 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.228067 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.228077 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.228086 | orchestrator | 2025-05-14 02:38:13.228095 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:38:13.228105 | orchestrator | Wednesday 14 May 2025 02:26:57 +0000 (0:00:00.572) 0:02:04.481 ********* 2025-05-14 02:38:13.228122 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:38:13.228132 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.228140 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:38:13.228151 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.228160 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:38:13.228169 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.228179 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:38:13.228189 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.228198 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:38:13.228208 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.228218 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:38:13.228227 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.228237 | orchestrator | 2025-05-14 02:38:13.228247 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:38:13.228257 | orchestrator | Wednesday 14 May 2025 02:26:58 +0000 (0:00:00.818) 0:02:05.299 ********* 2025-05-14 02:38:13.228267 | orchestrator | skipping: 
[testbed-node-0] 2025-05-14 02:38:13.228278 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.228288 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.228298 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.228308 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.228319 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.228329 | orchestrator | 2025-05-14 02:38:13.228347 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.228356 | orchestrator | Wednesday 14 May 2025 02:26:58 +0000 (0:00:00.561) 0:02:05.860 ********* 2025-05-14 02:38:13.228366 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.228375 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.228385 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.228394 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.228404 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.228415 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.228424 | orchestrator | 2025-05-14 02:38:13.228434 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:38:13.228444 | orchestrator | Wednesday 14 May 2025 02:26:59 +0000 (0:00:00.872) 0:02:06.733 ********* 2025-05-14 02:38:13.228455 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:38:13.228465 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.228475 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:38:13.228485 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.228495 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:38:13.228505 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.228516 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:38:13.228526 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.228535 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:38:13.228545 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.228554 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:38:13.228564 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.228573 | orchestrator | 2025-05-14 02:38:13.228608 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:38:13.228623 | orchestrator | Wednesday 14 May 2025 02:27:00 +0000 (0:00:00.782) 0:02:07.515 ********* 2025-05-14 02:38:13.228633 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.228643 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.228652 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.228662 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.228672 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.228690 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.228701 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.228711 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.228722 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
02:38:13.228732 | orchestrator | 2025-05-14 02:38:13.228742 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:38:13.228751 | orchestrator | Wednesday 14 May 2025 02:27:01 +0000 (0:00:00.810) 0:02:08.326 ********* 2025-05-14 02:38:13.228761 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.228771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.228780 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.228789 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.228799 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 02:38:13.228808 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 02:38:13.228818 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 02:38:13.228827 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.228836 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 02:38:13.228846 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 02:38:13.228855 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 02:38:13.228865 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.228875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.228884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.228894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.228903 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.228912 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:38:13.228922 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:38:13.228931 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:38:13.228941 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.228950 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:38:13.228959 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:38:13.228969 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:38:13.228978 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.228988 | orchestrator | 2025-05-14 02:38:13.228997 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:38:13.229006 | orchestrator | Wednesday 14 May 2025 02:27:02 +0000 (0:00:01.331) 0:02:09.658 ********* 2025-05-14 02:38:13.229016 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.229025 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.229035 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.229044 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.229053 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.229063 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.229072 | orchestrator | 2025-05-14 02:38:13.229082 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:38:13.229091 | orchestrator | Wednesday 14 May 2025 02:27:03 +0000 (0:00:01.044) 0:02:10.702 ********* 2025-05-14 02:38:13.229101 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.229110 | 
orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.229126 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.229135 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:38:13.229145 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.229154 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:38:13.229172 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.229181 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:38:13.229191 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.229200 | orchestrator | 2025-05-14 02:38:13.229209 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:38:13.229219 | orchestrator | Wednesday 14 May 2025 02:27:04 +0000 (0:00:00.988) 0:02:11.690 ********* 2025-05-14 02:38:13.229228 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.229238 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.229247 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.229257 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.229267 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.229277 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.229288 | orchestrator | 2025-05-14 02:38:13.229298 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:38:13.229307 | orchestrator | Wednesday 14 May 2025 02:27:05 +0000 (0:00:01.043) 0:02:12.734 ********* 2025-05-14 02:38:13.229317 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.229326 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.229336 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.229345 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.229355 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.229365 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.229374 | orchestrator | 2025-05-14 02:38:13.229383 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-05-14 02:38:13.229397 | orchestrator | Wednesday 14 May 2025 02:27:06 +0000 (0:00:01.176) 0:02:13.910 ********* 2025-05-14 02:38:13.229407 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.229418 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.229428 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.229438 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.229448 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.229459 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.229469 | orchestrator | 2025-05-14 02:38:13.229480 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-05-14 02:38:13.229487 | orchestrator | Wednesday 14 May 2025 02:27:08 +0000 (0:00:01.467) 0:02:15.377 ********* 2025-05-14 02:38:13.229493 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.229499 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.229505 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.229511 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.229517 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.229522 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.229528 | orchestrator | 2025-05-14 02:38:13.229534 | orchestrator | TASK [ceph-container-common : include 
prerequisites.yml] *********************** 2025-05-14 02:38:13.229541 | orchestrator | Wednesday 14 May 2025 02:27:10 +0000 (0:00:02.274) 0:02:17.652 ********* 2025-05-14 02:38:13.229547 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.229553 | orchestrator | 2025-05-14 02:38:13.229559 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-05-14 02:38:13.229565 | orchestrator | Wednesday 14 May 2025 02:27:11 +0000 (0:00:01.355) 0:02:19.007 ********* 2025-05-14 02:38:13.229571 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.229577 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.229583 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.229611 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.229617 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.229623 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.229629 | orchestrator | 2025-05-14 02:38:13.229636 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-05-14 02:38:13.229649 | orchestrator | Wednesday 14 May 2025 02:27:12 +0000 (0:00:00.721) 0:02:19.728 ********* 2025-05-14 02:38:13.229655 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.229661 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.229667 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.229673 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.229679 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.229689 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.229699 | orchestrator | 2025-05-14 02:38:13.229710 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-05-14 02:38:13.229721 | orchestrator | Wednesday 14 May 2025 02:27:13 +0000 (0:00:01.114) 0:02:20.843 ********* 2025-05-14 02:38:13.229731 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 02:38:13.229741 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 02:38:13.229750 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 02:38:13.229757 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 02:38:13.229763 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 02:38:13.229769 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-14 02:38:13.229775 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 02:38:13.229781 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 02:38:13.229787 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 02:38:13.229793 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 02:38:13.229805 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 02:38:13.229811 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-14 
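The "remove ceph udev rules" task above touches the same two rules files on every node; the all-ok results indicate they were already absent on this freshly provisioned testbed. A minimal Ansible sketch of such a cleanup loop (illustrative, not the literal ceph-ansible task):

    - name: remove ceph udev rules
      ansible.builtin.file:
        path: "{{ item }}"
        state: absent
      loop:
        - /usr/lib/udev/rules.d/95-ceph-osd.rules
        - /usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules
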
02:38:13.229817 | orchestrator | 2025-05-14 02:38:13.229824 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-05-14 02:38:13.229830 | orchestrator | Wednesday 14 May 2025 02:27:15 +0000 (0:00:01.475) 0:02:22.318 ********* 2025-05-14 02:38:13.229836 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.229842 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.229848 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.229854 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.229860 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.229866 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.229872 | orchestrator | 2025-05-14 02:38:13.229878 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************ 2025-05-14 02:38:13.229884 | orchestrator | Wednesday 14 May 2025 02:27:16 +0000 (0:00:01.431) 0:02:23.750 ********* 2025-05-14 02:38:13.229890 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.229896 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.229902 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.229908 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.229915 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.229921 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.229927 | orchestrator | 2025-05-14 02:38:13.229933 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-05-14 02:38:13.229939 | orchestrator | Wednesday 14 May 2025 02:27:17 +0000 (0:00:00.635) 0:02:24.385 ********* 2025-05-14 02:38:13.229945 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.229951 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.229957 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.229967 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.229973 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.229979 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.229990 | orchestrator | 2025-05-14 02:38:13.229996 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-05-14 02:38:13.230002 | orchestrator | Wednesday 14 May 2025 02:27:17 +0000 (0:00:00.848) 0:02:25.234 ********* 2025-05-14 02:38:13.230009 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.230040 | orchestrator | 2025-05-14 02:38:13.230048 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-05-14 02:38:13.230054 | orchestrator | Wednesday 14 May 2025 02:27:19 +0000 (0:00:01.226) 0:02:26.461 ********* 2025-05-14 02:38:13.230060 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.230066 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.230072 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.230078 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.230084 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.230091 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.230098 | orchestrator | 2025-05-14 02:38:13.230108 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-05-14 02:38:13.230119 | orchestrator | Wednesday 14 May 2025 
02:28:06 +0000 (0:00:46.958) 0:03:13.420 ********* 2025-05-14 02:38:13.230129 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 02:38:13.230140 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 02:38:13.230150 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 02:38:13.230159 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.230170 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 02:38:13.230180 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 02:38:13.230191 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 02:38:13.230201 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.230211 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 02:38:13.230221 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 02:38:13.230231 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 02:38:13.230241 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.230251 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 02:38:13.230261 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 02:38:13.230271 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 02:38:13.230282 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.230292 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 02:38:13.230301 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 02:38:13.230312 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 02:38:13.230322 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.230332 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-14 02:38:13.230343 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-14 02:38:13.230352 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-14 02:38:13.230362 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.230372 | orchestrator | 2025-05-14 02:38:13.230382 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-05-14 02:38:13.230392 | orchestrator | Wednesday 14 May 2025 02:28:07 +0000 (0:00:00.952) 0:03:14.373 ********* 2025-05-14 02:38:13.230402 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.230427 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.230438 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.230448 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.230458 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.230468 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.230478 | orchestrator | 2025-05-14 02:38:13.230488 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-05-14 02:38:13.230499 | orchestrator | Wednesday 14 May 2025 02:28:07 +0000 (0:00:00.756) 0:03:15.129 
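The 46.958 s recorded for "pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image" is by far the longest step in this part of the play, while the monitoring images (alertmanager, prometheus, grafana, node-exporter) are skipped. In ceph-ansible the pulled reference is normally assembled from three variables; the values below are an assumption that would reproduce the reference seen here, not a dump of this environment's configuration:

    # assumed group_vars; ceph-ansible concatenates these into
    # registry.osism.tech/osism/ceph-daemon:17.2.7
    ceph_docker_registry: registry.osism.tech
    ceph_docker_image: osism/ceph-daemon
    ceph_docker_image_tag: "17.2.7"
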
********* 2025-05-14 02:38:13.230509 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.230519 | orchestrator | 2025-05-14 02:38:13.230529 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-05-14 02:38:13.230539 | orchestrator | Wednesday 14 May 2025 02:28:08 +0000 (0:00:00.186) 0:03:15.315 ********* 2025-05-14 02:38:13.230548 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.230558 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.230568 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.230578 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.230603 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.230613 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.230624 | orchestrator | 2025-05-14 02:38:13.230633 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-05-14 02:38:13.230643 | orchestrator | Wednesday 14 May 2025 02:28:09 +0000 (0:00:01.219) 0:03:16.535 ********* 2025-05-14 02:38:13.230653 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.230663 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.230673 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.230683 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.230694 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.230704 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.230714 | orchestrator | 2025-05-14 02:38:13.230729 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-05-14 02:38:13.230740 | orchestrator | Wednesday 14 May 2025 02:28:10 +0000 (0:00:00.903) 0:03:17.439 ********* 2025-05-14 02:38:13.230750 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.230760 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.230770 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.230779 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.230789 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.230800 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.230809 | orchestrator | 2025-05-14 02:38:13.230820 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-05-14 02:38:13.230830 | orchestrator | Wednesday 14 May 2025 02:28:11 +0000 (0:00:01.279) 0:03:18.718 ********* 2025-05-14 02:38:13.230839 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.230850 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.230859 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.230869 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.230879 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.230889 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.230899 | orchestrator | 2025-05-14 02:38:13.230909 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-05-14 02:38:13.230920 | orchestrator | Wednesday 14 May 2025 02:28:14 +0000 (0:00:02.623) 0:03:21.341 ********* 2025-05-14 02:38:13.230930 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.230940 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.230949 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.230960 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.230970 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.230980 | orchestrator | ok: 
[testbed-node-5] 2025-05-14 02:38:13.230989 | orchestrator | 2025-05-14 02:38:13.231000 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-05-14 02:38:13.231010 | orchestrator | Wednesday 14 May 2025 02:28:14 +0000 (0:00:00.747) 0:03:22.088 ********* 2025-05-14 02:38:13.231031 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.231044 | orchestrator | 2025-05-14 02:38:13.231054 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-05-14 02:38:13.231064 | orchestrator | Wednesday 14 May 2025 02:28:15 +0000 (0:00:01.137) 0:03:23.226 ********* 2025-05-14 02:38:13.231074 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.231084 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.231094 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.231105 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.231115 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.231124 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.231134 | orchestrator | 2025-05-14 02:38:13.231145 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-05-14 02:38:13.231154 | orchestrator | Wednesday 14 May 2025 02:28:16 +0000 (0:00:00.775) 0:03:24.002 ********* 2025-05-14 02:38:13.231165 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.231175 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.231185 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.231195 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.231205 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.231215 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.231225 | orchestrator | 2025-05-14 02:38:13.231235 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-05-14 02:38:13.231245 | orchestrator | Wednesday 14 May 2025 02:28:17 +0000 (0:00:00.559) 0:03:24.561 ********* 2025-05-14 02:38:13.231255 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.231266 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.231276 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.231285 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.231295 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.231305 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.231315 | orchestrator | 2025-05-14 02:38:13.231325 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-05-14 02:38:13.231335 | orchestrator | Wednesday 14 May 2025 02:28:18 +0000 (0:00:00.804) 0:03:25.365 ********* 2025-05-14 02:38:13.231345 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.231355 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.231365 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.231375 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.231390 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.231401 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.231411 | orchestrator | 2025-05-14 02:38:13.231421 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-05-14 02:38:13.231430 
| orchestrator | Wednesday 14 May 2025 02:28:18 +0000 (0:00:00.706) 0:03:26.072 ********* 2025-05-14 02:38:13.231441 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.231451 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.231460 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.231470 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.231480 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.231490 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.231500 | orchestrator | 2025-05-14 02:38:13.231510 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-05-14 02:38:13.231520 | orchestrator | Wednesday 14 May 2025 02:28:19 +0000 (0:00:01.163) 0:03:27.236 ********* 2025-05-14 02:38:13.231530 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.231541 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.231550 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.231560 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.231570 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.231580 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.231639 | orchestrator | 2025-05-14 02:38:13.231650 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-05-14 02:38:13.231660 | orchestrator | Wednesday 14 May 2025 02:28:20 +0000 (0:00:00.850) 0:03:28.087 ********* 2025-05-14 02:38:13.231670 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.231680 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.231691 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.231701 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.231715 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.231725 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.231735 | orchestrator | 2025-05-14 02:38:13.231745 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-05-14 02:38:13.231755 | orchestrator | Wednesday 14 May 2025 02:28:21 +0000 (0:00:01.007) 0:03:29.095 ********* 2025-05-14 02:38:13.231765 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.231775 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.231786 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.231795 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.231806 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.231816 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.231827 | orchestrator | 2025-05-14 02:38:13.231837 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:38:13.231848 | orchestrator | Wednesday 14 May 2025 02:28:23 +0000 (0:00:01.458) 0:03:30.554 ********* 2025-05-14 02:38:13.231859 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.231869 | orchestrator | 2025-05-14 02:38:13.231880 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-05-14 02:38:13.231888 | orchestrator | Wednesday 14 May 2025 02:28:24 +0000 (0:00:01.466) 0:03:32.021 ********* 2025-05-14 02:38:13.231894 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-14 02:38:13.231900 | orchestrator | changed: 
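The release detection above resolves the containerized Ceph 17.2.7 to the quincy release: every older candidate (jewel through pacific) is skipped and only the quincy set_fact reports ok. Illustratively (this is not the literal release.yml logic), the mapping amounts to:

    # sketch: major version 17 maps to the quincy release name
    - name: set_fact ceph_release quincy
      ansible.builtin.set_fact:
        ceph_release: quincy
      when:
        - ceph_version is version('17', '>=')
        - ceph_version is version('18', '<')
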
[testbed-node-1] => (item=/etc/ceph) 2025-05-14 02:38:13.231906 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-14 02:38:13.231912 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-14 02:38:13.231918 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-14 02:38:13.231924 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-14 02:38:13.231930 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-14 02:38:13.231936 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-14 02:38:13.231942 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-14 02:38:13.231948 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-14 02:38:13.231954 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-14 02:38:13.231960 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-14 02:38:13.231966 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-14 02:38:13.231972 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-14 02:38:13.231978 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-14 02:38:13.231984 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-14 02:38:13.231990 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-14 02:38:13.231996 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-14 02:38:13.232002 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-14 02:38:13.232008 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-14 02:38:13.232014 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-14 02:38:13.232020 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-14 02:38:13.232026 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-14 02:38:13.232042 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-14 02:38:13.232048 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-14 02:38:13.232054 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-14 02:38:13.232060 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-14 02:38:13.232066 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-14 02:38:13.232072 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-14 02:38:13.232078 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-14 02:38:13.232084 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-14 02:38:13.232096 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-14 02:38:13.232102 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-14 02:38:13.232108 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-14 02:38:13.232114 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-14 02:38:13.232120 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-14 02:38:13.232126 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-14 02:38:13.232132 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-14 02:38:13.232138 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-14 02:38:13.232144 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-14 02:38:13.232150 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 02:38:13.232157 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 02:38:13.232163 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-14 02:38:13.232169 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-14 02:38:13.232175 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 02:38:13.232180 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 02:38:13.232186 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 02:38:13.232192 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 02:38:13.232202 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 02:38:13.232208 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-14 02:38:13.232215 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 02:38:13.232221 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 02:38:13.232227 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 02:38:13.232233 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 02:38:13.232239 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 02:38:13.232245 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-14 02:38:13.232251 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 02:38:13.232257 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 02:38:13.232263 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 02:38:13.232269 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 02:38:13.232275 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 02:38:13.232281 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-14 02:38:13.232287 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 02:38:13.232293 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 02:38:13.232304 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 02:38:13.232310 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 02:38:13.232316 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 02:38:13.232322 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-14 02:38:13.232328 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 02:38:13.232334 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 02:38:13.232340 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 02:38:13.232346 | 
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 02:38:13.232352 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-14 02:38:13.232358 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 02:38:13.232364 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 02:38:13.232370 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-14 02:38:13.232376 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 02:38:13.232382 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 02:38:13.232388 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-14 02:38:13.232394 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-14 02:38:13.232400 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-14 02:38:13.232406 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-14 02:38:13.232412 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-14 02:38:13.232418 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-14 02:38:13.232424 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-14 02:38:13.232430 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-14 02:38:13.232436 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-14 02:38:13.232442 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-14 02:38:13.232448 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-14 02:38:13.232458 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-14 02:38:13.232464 | orchestrator | 2025-05-14 02:38:13.232470 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:38:13.232476 | orchestrator | Wednesday 14 May 2025 02:28:30 +0000 (0:00:06.165) 0:03:38.186 ********* 2025-05-14 02:38:13.232483 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.232489 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.232495 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.232501 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.232507 | orchestrator | 2025-05-14 02:38:13.232513 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-05-14 02:38:13.232520 | orchestrator | Wednesday 14 May 2025 02:28:32 +0000 (0:00:01.247) 0:03:39.434 ********* 2025-05-14 02:38:13.232526 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-14 02:38:13.232532 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-14 02:38:13.232538 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-14 02:38:13.232545 | orchestrator | 2025-05-14 02:38:13.232551 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-05-14 
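The long "create ceph initial directories" fan-out above prepares the standard ceph-ansible directory layout on every node before any container is started; the per-RGW "create rados gateway instance directories" and "generate environment file" steps that follow only run on testbed-node-3/4/5. A compact sketch of an equivalent loop over the paths listed in this log (ownership and mode omitted because they are not visible here):

    - name: create ceph initial directories
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
      loop:
        - /etc/ceph
        - /var/lib/ceph/
        - /var/lib/ceph/mon
        - /var/lib/ceph/osd
        - /var/lib/ceph/mds
        - /var/lib/ceph/tmp
        - /var/lib/ceph/radosgw
        - /var/lib/ceph/bootstrap-rgw
        - /var/lib/ceph/bootstrap-mgr
        - /var/lib/ceph/bootstrap-mds
        - /var/lib/ceph/bootstrap-osd
        - /var/lib/ceph/bootstrap-rbd
        - /var/lib/ceph/bootstrap-rbd-mirror
        - /var/run/ceph
        - /var/log/ceph
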
02:38:13.232565 | orchestrator | Wednesday 14 May 2025 02:28:33 +0000 (0:00:01.304) 0:03:40.739 ********* 2025-05-14 02:38:13.232571 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-14 02:38:13.232578 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-14 02:38:13.232584 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-14 02:38:13.232610 | orchestrator | 2025-05-14 02:38:13.232617 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:38:13.232623 | orchestrator | Wednesday 14 May 2025 02:28:34 +0000 (0:00:01.209) 0:03:41.948 ********* 2025-05-14 02:38:13.232629 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.232635 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.232641 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.232648 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.232654 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.232660 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.232666 | orchestrator | 2025-05-14 02:38:13.232672 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:38:13.232678 | orchestrator | Wednesday 14 May 2025 02:28:35 +0000 (0:00:01.075) 0:03:43.024 ********* 2025-05-14 02:38:13.232684 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.232690 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.232700 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.232711 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.232721 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.232731 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.232741 | orchestrator | 2025-05-14 02:38:13.232751 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:38:13.232762 | orchestrator | Wednesday 14 May 2025 02:28:36 +0000 (0:00:00.755) 0:03:43.780 ********* 2025-05-14 02:38:13.232768 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.232774 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.232780 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.232786 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.232793 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.232799 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.232805 | orchestrator | 2025-05-14 02:38:13.232811 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:38:13.232817 | orchestrator | Wednesday 14 May 2025 02:28:37 +0000 (0:00:00.907) 0:03:44.687 ********* 2025-05-14 02:38:13.232823 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.232829 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.232835 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.232841 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.232847 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.232852 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.232858 | orchestrator | 2025-05-14 02:38:13.232864 | orchestrator | TASK [ceph-config : set_fact 
_devices] ***************************************** 2025-05-14 02:38:13.232871 | orchestrator | Wednesday 14 May 2025 02:28:38 +0000 (0:00:00.662) 0:03:45.349 ********* 2025-05-14 02:38:13.232877 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.232882 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.232888 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.232894 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.232900 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.232906 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.232912 | orchestrator | 2025-05-14 02:38:13.232918 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:38:13.232925 | orchestrator | Wednesday 14 May 2025 02:28:38 +0000 (0:00:00.753) 0:03:46.103 ********* 2025-05-14 02:38:13.232957 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.232963 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.232969 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.232975 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.232981 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.232988 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.232994 | orchestrator | 2025-05-14 02:38:13.233000 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:38:13.233011 | orchestrator | Wednesday 14 May 2025 02:28:39 +0000 (0:00:00.576) 0:03:46.680 ********* 2025-05-14 02:38:13.233017 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.233023 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.233029 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.233035 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.233041 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.233047 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.233053 | orchestrator | 2025-05-14 02:38:13.233059 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:38:13.233066 | orchestrator | Wednesday 14 May 2025 02:28:40 +0000 (0:00:00.746) 0:03:47.426 ********* 2025-05-14 02:38:13.233072 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.233078 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.233084 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.233090 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.233096 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.233102 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.233108 | orchestrator | 2025-05-14 02:38:13.233114 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:38:13.233120 | orchestrator | Wednesday 14 May 2025 02:28:40 +0000 (0:00:00.584) 0:03:48.011 ********* 2025-05-14 02:38:13.233126 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.233133 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.233139 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.233145 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.233151 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.233157 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.233163 | 
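On the OSD hosts (testbed-node-3/4/5) the play skips the "ceph-volume lvm batch --report" path and instead runs "ceph-volume lvm list" to count OSDs that already exist. As an illustration only (the real ceph-ansible tasks wrap the command for containerized deployments and parse the report differently), counting existing OSDs from that output could look like:

    # illustrative sketch, not the literal ceph-ansible implementation
    - name: run 'ceph-volume lvm list' to see how many osds have already been created
      ansible.builtin.command: ceph-volume lvm list --format json
      register: ceph_volume_lvm_list
      changed_when: false

    - name: set_fact num_osds (add existing osds)
      ansible.builtin.set_fact:
        num_osds: "{{ num_osds | int + (ceph_volume_lvm_list.stdout | from_json | length) }}"
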
orchestrator | 2025-05-14 02:38:13.233173 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:38:13.233179 | orchestrator | Wednesday 14 May 2025 02:28:43 +0000 (0:00:02.278) 0:03:50.289 ********* 2025-05-14 02:38:13.233185 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.233191 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.233197 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.233203 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.233209 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.233216 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.233225 | orchestrator | 2025-05-14 02:38:13.233236 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:38:13.233246 | orchestrator | Wednesday 14 May 2025 02:28:43 +0000 (0:00:00.574) 0:03:50.864 ********* 2025-05-14 02:38:13.233257 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:38:13.233266 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:38:13.233275 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.233285 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:38:13.233295 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:38:13.233305 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.233316 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:38:13.233326 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:38:13.233337 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:38:13.233347 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:38:13.233365 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.233375 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:38:13.233386 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:38:13.233396 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.233406 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.233416 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:38:13.233427 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:38:13.233437 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.233448 | orchestrator | 2025-05-14 02:38:13.233459 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:38:13.233469 | orchestrator | Wednesday 14 May 2025 02:28:44 +0000 (0:00:00.748) 0:03:51.613 ********* 2025-05-14 02:38:13.233478 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 02:38:13.233489 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 02:38:13.233499 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.233508 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 02:38:13.233518 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 02:38:13.233528 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.233537 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 02:38:13.233547 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 02:38:13.233557 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.233566 | orchestrator | ok: [testbed-node-3] => 
(item=osd memory target) 2025-05-14 02:38:13.233576 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-14 02:38:13.233631 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-05-14 02:38:13.233643 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-14 02:38:13.233653 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-14 02:38:13.233663 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-14 02:38:13.233673 | orchestrator | 2025-05-14 02:38:13.233683 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:38:13.233693 | orchestrator | Wednesday 14 May 2025 02:28:44 +0000 (0:00:00.645) 0:03:52.259 ********* 2025-05-14 02:38:13.233703 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.233713 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.233722 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.233732 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.233741 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.233751 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.233760 | orchestrator | 2025-05-14 02:38:13.233770 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:38:13.233779 | orchestrator | Wednesday 14 May 2025 02:28:45 +0000 (0:00:00.831) 0:03:53.090 ********* 2025-05-14 02:38:13.233789 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.233805 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.233815 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.233824 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.233834 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.233843 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.233853 | orchestrator | 2025-05-14 02:38:13.233862 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:38:13.233872 | orchestrator | Wednesday 14 May 2025 02:28:46 +0000 (0:00:00.601) 0:03:53.692 ********* 2025-05-14 02:38:13.233882 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.233891 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.233900 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.233910 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.233919 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.233929 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.233948 | orchestrator | 2025-05-14 02:38:13.233957 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:38:13.233967 | orchestrator | Wednesday 14 May 2025 02:28:47 +0000 (0:00:00.675) 0:03:54.367 ********* 2025-05-14 02:38:13.233976 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.233986 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.233995 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.234005 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.234125 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.234141 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.234151 | orchestrator | 2025-05-14 02:38:13.234160 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 
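The _osd_memory_target handling above first looks for an operator-supplied value in ceph_conf_overrides, checking both the "osd memory target" and "osd_memory_target" spellings and dropping any match from the override set, and then computes its own value on the OSD hosts. An illustrative ceph_conf_overrides fragment of the kind those tasks would pick up (the value is an example, not taken from this run):

    ceph_conf_overrides:
      osd:
        # either spelling is matched by the tasks above; 4 GiB shown
        # purely as an example value
        osd_memory_target: 4294967296
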
2025-05-14 02:38:13.234176 | orchestrator | Wednesday 14 May 2025 02:28:47 +0000 (0:00:00.555) 0:03:54.923 ********* 2025-05-14 02:38:13.234182 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.234187 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.234192 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.234197 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.234203 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.234208 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.234213 | orchestrator | 2025-05-14 02:38:13.234219 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:38:13.234224 | orchestrator | Wednesday 14 May 2025 02:28:48 +0000 (0:00:00.739) 0:03:55.662 ********* 2025-05-14 02:38:13.234229 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.234234 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.234240 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.234245 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.234250 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.234258 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.234266 | orchestrator | 2025-05-14 02:38:13.234275 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:38:13.234281 | orchestrator | Wednesday 14 May 2025 02:28:49 +0000 (0:00:00.699) 0:03:56.362 ********* 2025-05-14 02:38:13.234287 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.234292 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.234297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.234303 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.234308 | orchestrator | 2025-05-14 02:38:13.234313 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:38:13.234319 | orchestrator | Wednesday 14 May 2025 02:28:49 +0000 (0:00:00.720) 0:03:57.082 ********* 2025-05-14 02:38:13.234324 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.234330 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.234335 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.234340 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.234345 | orchestrator | 2025-05-14 02:38:13.234351 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:38:13.234356 | orchestrator | Wednesday 14 May 2025 02:28:50 +0000 (0:00:00.777) 0:03:57.859 ********* 2025-05-14 02:38:13.234361 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.234366 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.234372 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.234377 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.234382 | orchestrator | 2025-05-14 02:38:13.234387 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.234393 | orchestrator | Wednesday 14 May 2025 02:28:51 +0000 (0:00:00.432) 0:03:58.292 ********* 2025-05-14 02:38:13.234398 | orchestrator | skipping: [testbed-node-0] 2025-05-14 
02:38:13.234403 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.234413 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.234418 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.234423 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.234429 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.234434 | orchestrator | 2025-05-14 02:38:13.234440 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:38:13.234445 | orchestrator | Wednesday 14 May 2025 02:28:51 +0000 (0:00:00.805) 0:03:59.097 ********* 2025-05-14 02:38:13.234450 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:38:13.234455 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.234461 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:38:13.234466 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.234471 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:38:13.234476 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.234482 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-14 02:38:13.234487 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-14 02:38:13.234493 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-14 02:38:13.234498 | orchestrator | 2025-05-14 02:38:13.234503 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:38:13.234508 | orchestrator | Wednesday 14 May 2025 02:28:53 +0000 (0:00:01.408) 0:04:00.506 ********* 2025-05-14 02:38:13.234514 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.234524 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.234529 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.234535 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.234540 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.234545 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.234550 | orchestrator | 2025-05-14 02:38:13.234556 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.234561 | orchestrator | Wednesday 14 May 2025 02:28:53 +0000 (0:00:00.705) 0:04:01.212 ********* 2025-05-14 02:38:13.234566 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.234572 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.234577 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.234582 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.234603 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.234611 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.234617 | orchestrator | 2025-05-14 02:38:13.234622 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:38:13.234627 | orchestrator | Wednesday 14 May 2025 02:28:54 +0000 (0:00:00.831) 0:04:02.044 ********* 2025-05-14 02:38:13.234633 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:38:13.234638 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.234643 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:38:13.234649 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.234654 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:38:13.234659 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.234666 | orchestrator | skipping: [testbed-node-4] 
=> (item=0)  2025-05-14 02:38:13.234675 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.234684 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:38:13.234696 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.234706 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:38:13.234715 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.234723 | orchestrator | 2025-05-14 02:38:13.234732 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:38:13.234742 | orchestrator | Wednesday 14 May 2025 02:28:55 +0000 (0:00:00.781) 0:04:02.825 ********* 2025-05-14 02:38:13.234748 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.234753 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.234759 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.234769 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.234775 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.234780 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.234786 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.234791 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.234796 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.234802 | orchestrator | 2025-05-14 02:38:13.234807 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:38:13.234812 | orchestrator | Wednesday 14 May 2025 02:28:56 +0000 (0:00:00.766) 0:04:03.592 ********* 2025-05-14 02:38:13.234818 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.234823 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.234828 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.234833 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.234839 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 02:38:13.234844 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 02:38:13.234849 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 02:38:13.234882 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.234887 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 02:38:13.234893 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 02:38:13.234898 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 02:38:13.234903 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.234909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.234914 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:38:13.234919 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:38:13.234924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.234930 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:38:13.234935 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.234940 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.234945 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:38:13.234951 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.234983 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:38:13.234990 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:38:13.234995 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.235000 | orchestrator | 2025-05-14 02:38:13.235006 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:38:13.235011 | orchestrator | Wednesday 14 May 2025 02:28:57 +0000 (0:00:01.547) 0:04:05.140 ********* 2025-05-14 02:38:13.235016 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.235022 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.235027 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.235032 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.235037 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.235042 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.235048 | orchestrator | 2025-05-14 02:38:13.235058 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 02:38:13.235064 | orchestrator | Wednesday 14 May 2025 02:29:02 +0000 (0:00:04.683) 0:04:09.823 ********* 2025-05-14 02:38:13.235070 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.235075 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.235080 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.235090 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.235096 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.235101 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.235106 | orchestrator | 2025-05-14 02:38:13.235111 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-14 02:38:13.235117 | orchestrator | Wednesday 14 May 2025 02:29:03 +0000 (0:00:01.083) 0:04:10.907 ********* 2025-05-14 02:38:13.235123 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.235133 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.235140 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.235145 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.235151 | orchestrator | 2025-05-14 02:38:13.235156 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-14 02:38:13.235162 | orchestrator | Wednesday 14 May 2025 02:29:04 +0000 (0:00:01.169) 0:04:12.076 ********* 2025-05-14 02:38:13.235167 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.235172 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.235184 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.235190 | orchestrator | 2025-05-14 02:38:13.235196 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-05-14 02:38:13.235204 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.235210 | orchestrator | 2025-05-14 02:38:13.235215 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon 
restart script] *********************** 2025-05-14 02:38:13.235220 | orchestrator | Wednesday 14 May 2025 02:29:05 +0000 (0:00:01.119) 0:04:13.195 ********* 2025-05-14 02:38:13.235226 | orchestrator | 2025-05-14 02:38:13.235231 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-05-14 02:38:13.235237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.235242 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.235247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.235253 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.235258 | orchestrator | 2025-05-14 02:38:13.235264 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-14 02:38:13.235269 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.235274 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.235279 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.235285 | orchestrator | 2025-05-14 02:38:13.235290 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-14 02:38:13.235296 | orchestrator | Wednesday 14 May 2025 02:29:07 +0000 (0:00:01.192) 0:04:14.388 ********* 2025-05-14 02:38:13.235301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:38:13.235306 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:38:13.235312 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:38:13.235317 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.235322 | orchestrator | 2025-05-14 02:38:13.235327 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-14 02:38:13.235333 | orchestrator | Wednesday 14 May 2025 02:29:08 +0000 (0:00:00.916) 0:04:15.304 ********* 2025-05-14 02:38:13.235338 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.235343 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.235349 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.235354 | orchestrator | 2025-05-14 02:38:13.235359 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-05-14 02:38:13.235365 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.235370 | orchestrator | 2025-05-14 02:38:13.235376 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-14 02:38:13.235381 | orchestrator | Wednesday 14 May 2025 02:29:08 +0000 (0:00:00.875) 0:04:16.180 ********* 2025-05-14 02:38:13.235394 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.235400 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.235405 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.235410 | orchestrator | 2025-05-14 02:38:13.235415 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-05-14 02:38:13.235421 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.235426 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.235431 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.235436 | orchestrator | 2025-05-14 02:38:13.235442 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-14 02:38:13.235447 | orchestrator | Wednesday 14 May 2025 
02:29:09 +0000 (0:00:00.833) 0:04:17.013 ********* 2025-05-14 02:38:13.235452 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.235458 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.235463 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.235468 | orchestrator | 2025-05-14 02:38:13.235473 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-05-14 02:38:13.235479 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.235484 | orchestrator | 2025-05-14 02:38:13.235489 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-14 02:38:13.235495 | orchestrator | Wednesday 14 May 2025 02:29:10 +0000 (0:00:00.824) 0:04:17.838 ********* 2025-05-14 02:38:13.235500 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.235506 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.235511 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.235516 | orchestrator | 2025-05-14 02:38:13.235521 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-05-14 02:38:13.235527 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.235532 | orchestrator | 2025-05-14 02:38:13.235537 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-14 02:38:13.235542 | orchestrator | Wednesday 14 May 2025 02:29:11 +0000 (0:00:00.798) 0:04:18.637 ********* 2025-05-14 02:38:13.235548 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.235553 | orchestrator | 2025-05-14 02:38:13.235562 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-14 02:38:13.235568 | orchestrator | Wednesday 14 May 2025 02:29:11 +0000 (0:00:00.159) 0:04:18.796 ********* 2025-05-14 02:38:13.235573 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.235579 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.235584 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.235613 | orchestrator | 2025-05-14 02:38:13.235622 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-05-14 02:38:13.235630 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.235640 | orchestrator | 2025-05-14 02:38:13.235648 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-14 02:38:13.235656 | orchestrator | Wednesday 14 May 2025 02:29:12 +0000 (0:00:00.768) 0:04:19.565 ********* 2025-05-14 02:38:13.235665 | orchestrator | 2025-05-14 02:38:13.235674 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-05-14 02:38:13.235682 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.235691 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.235699 | orchestrator | 2025-05-14 02:38:13.235708 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-14 02:38:13.235716 | orchestrator | Wednesday 14 May 2025 02:29:13 +0000 (0:00:00.819) 0:04:20.384 ********* 2025-05-14 02:38:13.235725 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.235734 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.235742 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.235750 | orchestrator | 
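For context on the _mon_handler_called / _mgr_handler_called facts toggled in the handler output above: ceph-ansible uses them as guard flags around its daemon-restart scripts, so each daemon type is restarted at most once per play even when several handlers are notified. The following is a minimal, simplified sketch of that pattern in Ansible YAML; task wording mirrors the log, but the template name, script path, variables and group name are illustrative assumptions, not the exact ceph-ansible source.

- name: set _mgr_handler_called before restart          # flag that this handler run has started
  set_fact:
    _mgr_handler_called: true

- name: copy mgr restart script                          # render the helper script into the tempdir created earlier
  template:
    src: restart_mgr_daemon.sh.j2                        # illustrative template name
    dest: "{{ tmpdirpath.path }}/restart_mgr_daemon.sh"  # tmpdirpath assumed to come from the "make tempdir" handler
    mode: "0750"

- name: restart ceph mgr daemon(s)                       # run once, delegated to each mgr host in turn
  command: "{{ tmpdirpath.path }}/restart_mgr_daemon.sh"
  run_once: true
  delegate_to: "{{ item }}"
  loop: "{{ groups['mgrs'] }}"                           # assumed inventory group name
  when: handler_mgr_status | bool                        # only if a running mgr container/service was detected

- name: set _mgr_handler_called after restart            # clear the flag so later notifications are skipped
  set_fact:
    _mgr_handler_called: false

In the log above the restart tasks report "skipping" because handler_*_status evaluated false on this first deployment pass; only the flag facts and the script copy actually execute.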
2025-05-14 02:38:13.235759 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-05-14 02:38:13.235772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.235790 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.235799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.235808 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.235817 | orchestrator | 2025-05-14 02:38:13.235826 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-14 02:38:13.235834 | orchestrator | Wednesday 14 May 2025 02:29:14 +0000 (0:00:01.219) 0:04:21.604 ********* 2025-05-14 02:38:13.235843 | orchestrator | 2025-05-14 02:38:13.235852 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-14 02:38:13.235860 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.235869 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.235878 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.235887 | orchestrator | 2025-05-14 02:38:13.235895 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-14 02:38:13.235903 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.235913 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.235921 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.235930 | orchestrator | 2025-05-14 02:38:13.235939 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-14 02:38:13.235947 | orchestrator | Wednesday 14 May 2025 02:29:15 +0000 (0:00:01.238) 0:04:22.842 ********* 2025-05-14 02:38:13.235956 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:38:13.235965 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:38:13.235974 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:38:13.235983 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.235992 | orchestrator | 2025-05-14 02:38:13.236000 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-14 02:38:13.236010 | orchestrator | Wednesday 14 May 2025 02:29:16 +0000 (0:00:00.728) 0:04:23.570 ********* 2025-05-14 02:38:13.236019 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.236028 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.236037 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.236045 | orchestrator | 2025-05-14 02:38:13.236053 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-05-14 02:38:13.236062 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.236071 | orchestrator | 2025-05-14 02:38:13.236080 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-14 02:38:13.236088 | orchestrator | Wednesday 14 May 2025 02:29:17 +0000 (0:00:00.809) 0:04:24.380 ********* 2025-05-14 02:38:13.236097 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.236106 | orchestrator | 2025-05-14 02:38:13.236114 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-14 02:38:13.236123 | 
orchestrator | Wednesday 14 May 2025 02:29:17 +0000 (0:00:00.480) 0:04:24.861 ********* 2025-05-14 02:38:13.236131 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.236140 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.236149 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.236159 | orchestrator | 2025-05-14 02:38:13.236167 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-05-14 02:38:13.236176 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.236185 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.236194 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.236202 | orchestrator | 2025-05-14 02:38:13.236210 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-14 02:38:13.236219 | orchestrator | Wednesday 14 May 2025 02:29:18 +0000 (0:00:01.073) 0:04:25.935 ********* 2025-05-14 02:38:13.236228 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.236236 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.236245 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.236260 | orchestrator | 2025-05-14 02:38:13.236269 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:38:13.236279 | orchestrator | Wednesday 14 May 2025 02:29:19 +0000 (0:00:01.194) 0:04:27.129 ********* 2025-05-14 02:38:13.236288 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.236296 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.236305 | orchestrator | 2025-05-14 02:38:13.236314 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-14 02:38:13.236322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.236331 | orchestrator | 2025-05-14 02:38:13.236346 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:38:13.236354 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.236363 | orchestrator | 2025-05-14 02:38:13.236372 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-14 02:38:13.236381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.236389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.236398 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.236407 | orchestrator | 2025-05-14 02:38:13.236415 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-14 02:38:13.236424 | orchestrator | Wednesday 14 May 2025 02:29:21 +0000 (0:00:01.556) 0:04:28.686 ********* 2025-05-14 02:38:13.236433 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.236441 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.236450 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.236459 | orchestrator | 2025-05-14 02:38:13.236467 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-14 02:38:13.236476 | orchestrator | Wednesday 14 May 2025 02:29:22 +0000 (0:00:00.909) 0:04:29.595 ********* 2025-05-14 02:38:13.236485 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.236494 | orchestrator | 2025-05-14 02:38:13.236502 | orchestrator | RUNNING HANDLER 
[ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-14 02:38:13.236511 | orchestrator | Wednesday 14 May 2025 02:29:22 +0000 (0:00:00.496) 0:04:30.092 ********* 2025-05-14 02:38:13.236520 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.236529 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.236537 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.236550 | orchestrator | 2025-05-14 02:38:13.236559 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-14 02:38:13.236568 | orchestrator | Wednesday 14 May 2025 02:29:23 +0000 (0:00:00.360) 0:04:30.452 ********* 2025-05-14 02:38:13.236576 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.236585 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.236609 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.236618 | orchestrator | 2025-05-14 02:38:13.236627 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-14 02:38:13.236635 | orchestrator | Wednesday 14 May 2025 02:29:24 +0000 (0:00:01.433) 0:04:31.885 ********* 2025-05-14 02:38:13.236644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.236653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.236661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.236670 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.236679 | orchestrator | 2025-05-14 02:38:13.236688 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-14 02:38:13.236696 | orchestrator | Wednesday 14 May 2025 02:29:25 +0000 (0:00:01.018) 0:04:32.904 ********* 2025-05-14 02:38:13.236705 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.236713 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.236722 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.236730 | orchestrator | 2025-05-14 02:38:13.236738 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-14 02:38:13.236752 | orchestrator | Wednesday 14 May 2025 02:29:26 +0000 (0:00:00.556) 0:04:33.461 ********* 2025-05-14 02:38:13.236760 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.236769 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.236777 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.236785 | orchestrator | 2025-05-14 02:38:13.236794 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-14 02:38:13.236802 | orchestrator | Wednesday 14 May 2025 02:29:26 +0000 (0:00:00.413) 0:04:33.875 ********* 2025-05-14 02:38:13.236810 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.236819 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.236827 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.236835 | orchestrator | 2025-05-14 02:38:13.236843 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-14 02:38:13.236852 | orchestrator | Wednesday 14 May 2025 02:29:27 +0000 (0:00:00.736) 0:04:34.612 ********* 2025-05-14 02:38:13.236860 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.236868 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.236877 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.236885 | orchestrator | 2025-05-14 
02:38:13.236893 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:38:13.236901 | orchestrator | Wednesday 14 May 2025 02:29:27 +0000 (0:00:00.366) 0:04:34.978 ********* 2025-05-14 02:38:13.236910 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.236918 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.236926 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.236934 | orchestrator | 2025-05-14 02:38:13.236943 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-14 02:38:13.236951 | orchestrator | 2025-05-14 02:38:13.236959 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:38:13.236968 | orchestrator | Wednesday 14 May 2025 02:29:30 +0000 (0:00:02.528) 0:04:37.507 ********* 2025-05-14 02:38:13.236976 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.236984 | orchestrator | 2025-05-14 02:38:13.236993 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 02:38:13.237001 | orchestrator | Wednesday 14 May 2025 02:29:31 +0000 (0:00:00.920) 0:04:38.428 ********* 2025-05-14 02:38:13.237009 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.237018 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.237026 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.237035 | orchestrator | 2025-05-14 02:38:13.237043 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:38:13.237051 | orchestrator | Wednesday 14 May 2025 02:29:32 +0000 (0:00:00.862) 0:04:39.290 ********* 2025-05-14 02:38:13.237060 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237068 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237082 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237091 | orchestrator | 2025-05-14 02:38:13.237099 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:38:13.237107 | orchestrator | Wednesday 14 May 2025 02:29:32 +0000 (0:00:00.565) 0:04:39.856 ********* 2025-05-14 02:38:13.237116 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237124 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237132 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237140 | orchestrator | 2025-05-14 02:38:13.237149 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:38:13.237157 | orchestrator | Wednesday 14 May 2025 02:29:32 +0000 (0:00:00.359) 0:04:40.216 ********* 2025-05-14 02:38:13.237165 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237174 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237183 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237192 | orchestrator | 2025-05-14 02:38:13.237200 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:38:13.237215 | orchestrator | Wednesday 14 May 2025 02:29:33 +0000 (0:00:00.340) 0:04:40.556 ********* 2025-05-14 02:38:13.237225 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.237234 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.237243 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.237252 | 
orchestrator | 2025-05-14 02:38:13.237260 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:38:13.237266 | orchestrator | Wednesday 14 May 2025 02:29:34 +0000 (0:00:00.752) 0:04:41.309 ********* 2025-05-14 02:38:13.237271 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237277 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237282 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237287 | orchestrator | 2025-05-14 02:38:13.237295 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:38:13.237301 | orchestrator | Wednesday 14 May 2025 02:29:34 +0000 (0:00:00.605) 0:04:41.915 ********* 2025-05-14 02:38:13.237306 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237311 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237316 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237322 | orchestrator | 2025-05-14 02:38:13.237327 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:38:13.237332 | orchestrator | Wednesday 14 May 2025 02:29:35 +0000 (0:00:00.397) 0:04:42.312 ********* 2025-05-14 02:38:13.237337 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237342 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237348 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237353 | orchestrator | 2025-05-14 02:38:13.237358 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:38:13.237363 | orchestrator | Wednesday 14 May 2025 02:29:35 +0000 (0:00:00.337) 0:04:42.649 ********* 2025-05-14 02:38:13.237369 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237374 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237379 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237384 | orchestrator | 2025-05-14 02:38:13.237389 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:38:13.237395 | orchestrator | Wednesday 14 May 2025 02:29:35 +0000 (0:00:00.362) 0:04:43.012 ********* 2025-05-14 02:38:13.237400 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237405 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237410 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237416 | orchestrator | 2025-05-14 02:38:13.237421 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:38:13.237426 | orchestrator | Wednesday 14 May 2025 02:29:36 +0000 (0:00:00.595) 0:04:43.608 ********* 2025-05-14 02:38:13.237432 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.237437 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.237443 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.237448 | orchestrator | 2025-05-14 02:38:13.237453 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:38:13.237458 | orchestrator | Wednesday 14 May 2025 02:29:37 +0000 (0:00:00.739) 0:04:44.348 ********* 2025-05-14 02:38:13.237464 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237469 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237474 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237479 | orchestrator | 2025-05-14 02:38:13.237485 | orchestrator | TASK [ceph-handler : 
set_fact handler_mon_status] ****************************** 2025-05-14 02:38:13.237490 | orchestrator | Wednesday 14 May 2025 02:29:37 +0000 (0:00:00.355) 0:04:44.703 ********* 2025-05-14 02:38:13.237495 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.237501 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.237506 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.237511 | orchestrator | 2025-05-14 02:38:13.237516 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:38:13.237522 | orchestrator | Wednesday 14 May 2025 02:29:37 +0000 (0:00:00.333) 0:04:45.036 ********* 2025-05-14 02:38:13.237532 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237537 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237542 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237548 | orchestrator | 2025-05-14 02:38:13.237553 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:38:13.237558 | orchestrator | Wednesday 14 May 2025 02:29:38 +0000 (0:00:00.599) 0:04:45.636 ********* 2025-05-14 02:38:13.237563 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237569 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237574 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237579 | orchestrator | 2025-05-14 02:38:13.237584 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:38:13.237623 | orchestrator | Wednesday 14 May 2025 02:29:38 +0000 (0:00:00.374) 0:04:46.010 ********* 2025-05-14 02:38:13.237628 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237634 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237639 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237644 | orchestrator | 2025-05-14 02:38:13.237650 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:38:13.237655 | orchestrator | Wednesday 14 May 2025 02:29:39 +0000 (0:00:00.394) 0:04:46.405 ********* 2025-05-14 02:38:13.237661 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237666 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237677 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237687 | orchestrator | 2025-05-14 02:38:13.237697 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:38:13.237706 | orchestrator | Wednesday 14 May 2025 02:29:39 +0000 (0:00:00.420) 0:04:46.826 ********* 2025-05-14 02:38:13.237714 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237723 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237733 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237742 | orchestrator | 2025-05-14 02:38:13.237750 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:38:13.237760 | orchestrator | Wednesday 14 May 2025 02:29:40 +0000 (0:00:00.764) 0:04:47.591 ********* 2025-05-14 02:38:13.237766 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.237771 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.237777 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.237782 | orchestrator | 2025-05-14 02:38:13.237787 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:38:13.237793 | 
orchestrator | Wednesday 14 May 2025 02:29:40 +0000 (0:00:00.398) 0:04:47.989 ********* 2025-05-14 02:38:13.237798 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.237803 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.237808 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.237813 | orchestrator | 2025-05-14 02:38:13.237819 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:38:13.237824 | orchestrator | Wednesday 14 May 2025 02:29:41 +0000 (0:00:00.449) 0:04:48.438 ********* 2025-05-14 02:38:13.237829 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237835 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237840 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237845 | orchestrator | 2025-05-14 02:38:13.237854 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:38:13.237859 | orchestrator | Wednesday 14 May 2025 02:29:41 +0000 (0:00:00.382) 0:04:48.821 ********* 2025-05-14 02:38:13.237865 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237870 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237875 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237880 | orchestrator | 2025-05-14 02:38:13.237886 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:38:13.237891 | orchestrator | Wednesday 14 May 2025 02:29:42 +0000 (0:00:00.687) 0:04:49.508 ********* 2025-05-14 02:38:13.237896 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237906 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237912 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237917 | orchestrator | 2025-05-14 02:38:13.237922 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:38:13.237927 | orchestrator | Wednesday 14 May 2025 02:29:42 +0000 (0:00:00.405) 0:04:49.914 ********* 2025-05-14 02:38:13.237933 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237938 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237943 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237949 | orchestrator | 2025-05-14 02:38:13.237954 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:38:13.237959 | orchestrator | Wednesday 14 May 2025 02:29:43 +0000 (0:00:00.364) 0:04:50.279 ********* 2025-05-14 02:38:13.237965 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.237970 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.237975 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.237981 | orchestrator | 2025-05-14 02:38:13.237986 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:38:13.237991 | orchestrator | Wednesday 14 May 2025 02:29:43 +0000 (0:00:00.330) 0:04:50.609 ********* 2025-05-14 02:38:13.237997 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238002 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238008 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238013 | orchestrator | 2025-05-14 02:38:13.238048 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:38:13.238054 | orchestrator | Wednesday 14 May 2025 02:29:43 +0000 (0:00:00.581) 0:04:51.191 
********* 2025-05-14 02:38:13.238059 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238064 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238070 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238075 | orchestrator | 2025-05-14 02:38:13.238080 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:38:13.238086 | orchestrator | Wednesday 14 May 2025 02:29:44 +0000 (0:00:00.355) 0:04:51.546 ********* 2025-05-14 02:38:13.238092 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238097 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238102 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238108 | orchestrator | 2025-05-14 02:38:13.238113 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:38:13.238119 | orchestrator | Wednesday 14 May 2025 02:29:44 +0000 (0:00:00.471) 0:04:52.018 ********* 2025-05-14 02:38:13.238124 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238129 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238135 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238140 | orchestrator | 2025-05-14 02:38:13.238145 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:38:13.238151 | orchestrator | Wednesday 14 May 2025 02:29:45 +0000 (0:00:00.421) 0:04:52.439 ********* 2025-05-14 02:38:13.238155 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238160 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238164 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238169 | orchestrator | 2025-05-14 02:38:13.238174 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:38:13.238179 | orchestrator | Wednesday 14 May 2025 02:29:45 +0000 (0:00:00.687) 0:04:53.126 ********* 2025-05-14 02:38:13.238183 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238188 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238193 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238197 | orchestrator | 2025-05-14 02:38:13.238202 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:38:13.238212 | orchestrator | Wednesday 14 May 2025 02:29:46 +0000 (0:00:00.380) 0:04:53.507 ********* 2025-05-14 02:38:13.238223 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238228 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238232 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238237 | orchestrator | 2025-05-14 02:38:13.238242 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:38:13.238247 | orchestrator | Wednesday 14 May 2025 02:29:46 +0000 (0:00:00.359) 0:04:53.866 ********* 2025-05-14 02:38:13.238252 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:38:13.238257 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:38:13.238261 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:38:13.238266 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:38:13.238271 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238276 | orchestrator | 
skipping: [testbed-node-1] 2025-05-14 02:38:13.238280 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:38:13.238285 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:38:13.238289 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238294 | orchestrator | 2025-05-14 02:38:13.238299 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:38:13.238304 | orchestrator | Wednesday 14 May 2025 02:29:47 +0000 (0:00:00.455) 0:04:54.322 ********* 2025-05-14 02:38:13.238309 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 02:38:13.238313 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 02:38:13.238318 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238323 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 02:38:13.238331 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 02:38:13.238335 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238340 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 02:38:13.238345 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 02:38:13.238350 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238354 | orchestrator | 2025-05-14 02:38:13.238359 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:38:13.238364 | orchestrator | Wednesday 14 May 2025 02:29:47 +0000 (0:00:00.726) 0:04:55.048 ********* 2025-05-14 02:38:13.238369 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238373 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238378 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238383 | orchestrator | 2025-05-14 02:38:13.238387 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:38:13.238392 | orchestrator | Wednesday 14 May 2025 02:29:48 +0000 (0:00:00.368) 0:04:55.417 ********* 2025-05-14 02:38:13.238397 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238402 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238406 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238411 | orchestrator | 2025-05-14 02:38:13.238416 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:38:13.238421 | orchestrator | Wednesday 14 May 2025 02:29:48 +0000 (0:00:00.431) 0:04:55.849 ********* 2025-05-14 02:38:13.238426 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238431 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238435 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238440 | orchestrator | 2025-05-14 02:38:13.238445 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:38:13.238450 | orchestrator | Wednesday 14 May 2025 02:29:48 +0000 (0:00:00.414) 0:04:56.263 ********* 2025-05-14 02:38:13.238454 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238459 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238464 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238469 | orchestrator | 2025-05-14 02:38:13.238477 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to 
radosgw_address_block ipv6] **** 2025-05-14 02:38:13.238482 | orchestrator | Wednesday 14 May 2025 02:29:49 +0000 (0:00:00.645) 0:04:56.909 ********* 2025-05-14 02:38:13.238488 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238496 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238504 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238512 | orchestrator | 2025-05-14 02:38:13.238520 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:38:13.238527 | orchestrator | Wednesday 14 May 2025 02:29:49 +0000 (0:00:00.341) 0:04:57.250 ********* 2025-05-14 02:38:13.238535 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238543 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238551 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238558 | orchestrator | 2025-05-14 02:38:13.238566 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:38:13.238574 | orchestrator | Wednesday 14 May 2025 02:29:50 +0000 (0:00:00.549) 0:04:57.800 ********* 2025-05-14 02:38:13.238582 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.238606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.238614 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.238621 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238628 | orchestrator | 2025-05-14 02:38:13.238636 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:38:13.238644 | orchestrator | Wednesday 14 May 2025 02:29:51 +0000 (0:00:00.484) 0:04:58.285 ********* 2025-05-14 02:38:13.238652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.238660 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.238668 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.238676 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238684 | orchestrator | 2025-05-14 02:38:13.238692 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:38:13.238700 | orchestrator | Wednesday 14 May 2025 02:29:51 +0000 (0:00:00.454) 0:04:58.739 ********* 2025-05-14 02:38:13.238714 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.238721 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.238729 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.238736 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238743 | orchestrator | 2025-05-14 02:38:13.238751 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.238758 | orchestrator | Wednesday 14 May 2025 02:29:52 +0000 (0:00:00.739) 0:04:59.478 ********* 2025-05-14 02:38:13.238765 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238773 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238780 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238788 | orchestrator | 2025-05-14 02:38:13.238795 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:38:13.238802 | orchestrator | Wednesday 14 May 2025 02:29:52 +0000 
(0:00:00.660) 0:05:00.139 ********* 2025-05-14 02:38:13.238810 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:38:13.238817 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238824 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:38:13.238832 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238839 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:38:13.238846 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238854 | orchestrator | 2025-05-14 02:38:13.238861 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:38:13.238869 | orchestrator | Wednesday 14 May 2025 02:29:53 +0000 (0:00:00.524) 0:05:00.663 ********* 2025-05-14 02:38:13.238876 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.238890 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.238905 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.238913 | orchestrator | 2025-05-14 02:38:13.238920 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.238928 | orchestrator | Wednesday 14 May 2025 02:29:53 +0000 (0:00:00.338) 0:05:01.002 ********* 2025-05-14 02:38:13.239003 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.239011 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.239018 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.239025 | orchestrator | 2025-05-14 02:38:13.239033 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:38:13.239040 | orchestrator | Wednesday 14 May 2025 02:29:54 +0000 (0:00:00.358) 0:05:01.361 ********* 2025-05-14 02:38:13.239048 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:38:13.239055 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.239063 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:38:13.239070 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.239078 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:38:13.239085 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.239092 | orchestrator | 2025-05-14 02:38:13.239100 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:38:13.239107 | orchestrator | Wednesday 14 May 2025 02:29:54 +0000 (0:00:00.832) 0:05:02.194 ********* 2025-05-14 02:38:13.239199 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.239211 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.239220 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.239225 | orchestrator | 2025-05-14 02:38:13.239229 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:38:13.239234 | orchestrator | Wednesday 14 May 2025 02:29:55 +0000 (0:00:00.338) 0:05:02.532 ********* 2025-05-14 02:38:13.239239 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.239244 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.239249 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.239253 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.239258 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 02:38:13.239263 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-4)  2025-05-14 02:38:13.239267 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 02:38:13.239272 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.239277 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 02:38:13.239281 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 02:38:13.239286 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 02:38:13.239291 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.239295 | orchestrator | 2025-05-14 02:38:13.239300 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:38:13.239305 | orchestrator | Wednesday 14 May 2025 02:29:56 +0000 (0:00:00.887) 0:05:03.420 ********* 2025-05-14 02:38:13.239310 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.239314 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.239319 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.239324 | orchestrator | 2025-05-14 02:38:13.239328 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:38:13.239333 | orchestrator | Wednesday 14 May 2025 02:29:56 +0000 (0:00:00.509) 0:05:03.929 ********* 2025-05-14 02:38:13.239338 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.239343 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.239347 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.239352 | orchestrator | 2025-05-14 02:38:13.239357 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:38:13.239361 | orchestrator | Wednesday 14 May 2025 02:29:57 +0000 (0:00:00.624) 0:05:04.553 ********* 2025-05-14 02:38:13.239381 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.239387 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.239391 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.239396 | orchestrator | 2025-05-14 02:38:13.239401 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:38:13.239405 | orchestrator | Wednesday 14 May 2025 02:29:57 +0000 (0:00:00.564) 0:05:05.117 ********* 2025-05-14 02:38:13.239410 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.239415 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.239425 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.239430 | orchestrator | 2025-05-14 02:38:13.239435 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-05-14 02:38:13.239440 | orchestrator | Wednesday 14 May 2025 02:29:58 +0000 (0:00:00.774) 0:05:05.892 ********* 2025-05-14 02:38:13.239445 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.239449 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.239454 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.239459 | orchestrator | 2025-05-14 02:38:13.239463 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-05-14 02:38:13.239468 | orchestrator | Wednesday 14 May 2025 02:29:59 +0000 (0:00:00.423) 0:05:06.316 ********* 2025-05-14 02:38:13.239480 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.239485 | orchestrator | 2025-05-14 
02:38:13.239489 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-05-14 02:38:13.239494 | orchestrator | Wednesday 14 May 2025 02:29:59 +0000 (0:00:00.580) 0:05:06.896 ********* 2025-05-14 02:38:13.239499 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.239504 | orchestrator | 2025-05-14 02:38:13.239508 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-05-14 02:38:13.239513 | orchestrator | Wednesday 14 May 2025 02:29:59 +0000 (0:00:00.179) 0:05:07.075 ********* 2025-05-14 02:38:13.239518 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-14 02:38:13.239522 | orchestrator | 2025-05-14 02:38:13.239527 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-05-14 02:38:13.239532 | orchestrator | Wednesday 14 May 2025 02:30:00 +0000 (0:00:00.799) 0:05:07.875 ********* 2025-05-14 02:38:13.239540 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.239545 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.239550 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.239554 | orchestrator | 2025-05-14 02:38:13.239559 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-05-14 02:38:13.239564 | orchestrator | Wednesday 14 May 2025 02:30:01 +0000 (0:00:00.664) 0:05:08.540 ********* 2025-05-14 02:38:13.239569 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.239573 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.239578 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.239583 | orchestrator | 2025-05-14 02:38:13.239630 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-05-14 02:38:13.239635 | orchestrator | Wednesday 14 May 2025 02:30:01 +0000 (0:00:00.426) 0:05:08.966 ********* 2025-05-14 02:38:13.239640 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.239645 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.239650 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.239654 | orchestrator | 2025-05-14 02:38:13.239659 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-05-14 02:38:13.239664 | orchestrator | Wednesday 14 May 2025 02:30:02 +0000 (0:00:01.205) 0:05:10.172 ********* 2025-05-14 02:38:13.239669 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.239673 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.239678 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.239685 | orchestrator | 2025-05-14 02:38:13.239694 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-05-14 02:38:13.239707 | orchestrator | Wednesday 14 May 2025 02:30:03 +0000 (0:00:00.991) 0:05:11.163 ********* 2025-05-14 02:38:13.239715 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.239723 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.239732 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.239759 | orchestrator | 2025-05-14 02:38:13.239765 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-05-14 02:38:13.239769 | orchestrator | Wednesday 14 May 2025 02:30:04 +0000 (0:00:00.746) 0:05:11.910 ********* 2025-05-14 02:38:13.239774 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.239779 | orchestrator | ok: [testbed-node-1] 2025-05-14 
02:38:13.239784 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.239788 | orchestrator | 2025-05-14 02:38:13.239793 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-05-14 02:38:13.239798 | orchestrator | Wednesday 14 May 2025 02:30:05 +0000 (0:00:00.745) 0:05:12.656 ********* 2025-05-14 02:38:13.239803 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.239807 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.239812 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.239816 | orchestrator | 2025-05-14 02:38:13.239821 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-05-14 02:38:13.239826 | orchestrator | Wednesday 14 May 2025 02:30:05 +0000 (0:00:00.356) 0:05:13.012 ********* 2025-05-14 02:38:13.239831 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.239836 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.239840 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.239845 | orchestrator | 2025-05-14 02:38:13.239850 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-05-14 02:38:13.239855 | orchestrator | Wednesday 14 May 2025 02:30:06 +0000 (0:00:00.634) 0:05:13.646 ********* 2025-05-14 02:38:13.239859 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.239864 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.239869 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.239874 | orchestrator | 2025-05-14 02:38:13.239878 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-05-14 02:38:13.239883 | orchestrator | Wednesday 14 May 2025 02:30:06 +0000 (0:00:00.370) 0:05:14.017 ********* 2025-05-14 02:38:13.239888 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.239893 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.239897 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.239902 | orchestrator | 2025-05-14 02:38:13.239907 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-05-14 02:38:13.239912 | orchestrator | Wednesday 14 May 2025 02:30:07 +0000 (0:00:00.377) 0:05:14.394 ********* 2025-05-14 02:38:13.239918 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.239926 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.239933 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.239941 | orchestrator | 2025-05-14 02:38:13.239949 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-05-14 02:38:13.239962 | orchestrator | Wednesday 14 May 2025 02:30:08 +0000 (0:00:01.269) 0:05:15.663 ********* 2025-05-14 02:38:13.239970 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.239977 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.239985 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.239992 | orchestrator | 2025-05-14 02:38:13.240000 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-05-14 02:38:13.240008 | orchestrator | Wednesday 14 May 2025 02:30:09 +0000 (0:00:00.624) 0:05:16.288 ********* 2025-05-14 02:38:13.240016 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.240024 | orchestrator | 2025-05-14 02:38:13.240032 | orchestrator | TASK [ceph-mon : 
ensure systemd service override directory exists] ************* 2025-05-14 02:38:13.240040 | orchestrator | Wednesday 14 May 2025 02:30:09 +0000 (0:00:00.592) 0:05:16.880 ********* 2025-05-14 02:38:13.240052 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.240061 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.240127 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.240137 | orchestrator | 2025-05-14 02:38:13.240144 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-05-14 02:38:13.240175 | orchestrator | Wednesday 14 May 2025 02:30:09 +0000 (0:00:00.359) 0:05:17.240 ********* 2025-05-14 02:38:13.240183 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.240190 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.240197 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.240204 | orchestrator | 2025-05-14 02:38:13.240211 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-05-14 02:38:13.240217 | orchestrator | Wednesday 14 May 2025 02:30:10 +0000 (0:00:00.610) 0:05:17.850 ********* 2025-05-14 02:38:13.240229 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.240236 | orchestrator | 2025-05-14 02:38:13.240243 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-05-14 02:38:13.240250 | orchestrator | Wednesday 14 May 2025 02:30:11 +0000 (0:00:00.566) 0:05:18.416 ********* 2025-05-14 02:38:13.240257 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.240265 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.240272 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.240279 | orchestrator | 2025-05-14 02:38:13.240285 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-05-14 02:38:13.240292 | orchestrator | Wednesday 14 May 2025 02:30:12 +0000 (0:00:01.491) 0:05:19.907 ********* 2025-05-14 02:38:13.240298 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.240305 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.240312 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.240319 | orchestrator | 2025-05-14 02:38:13.240325 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-05-14 02:38:13.240332 | orchestrator | Wednesday 14 May 2025 02:30:13 +0000 (0:00:01.205) 0:05:21.113 ********* 2025-05-14 02:38:13.240340 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.240347 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.240354 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.240361 | orchestrator | 2025-05-14 02:38:13.240368 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-05-14 02:38:13.240375 | orchestrator | Wednesday 14 May 2025 02:30:15 +0000 (0:00:01.716) 0:05:22.829 ********* 2025-05-14 02:38:13.240381 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.240389 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.240395 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.240403 | orchestrator | 2025-05-14 02:38:13.240410 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-05-14 02:38:13.240417 | orchestrator | Wednesday 14 May 2025 02:30:17 
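A minimal sketch of the mkfs and systemd steps above, assuming the ceph-mon@<hostname> unit naming commonly used by containerized ceph-ansible deployments (host and unit names are illustrative):

    # initialise the monitor store from the prepared keyring
    ceph-mon --mkfs -i testbed-node-0 --keyring /etc/ceph/ceph.mon.keyring
    # pick up the generated unit files, then enable and start the monitor
    systemctl daemon-reload
    systemctl enable --now ceph-mon.target
    systemctl enable --now ceph-mon@testbed-node-0.service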
+0000 (0:00:02.069) 0:05:24.899 ********* 2025-05-14 02:38:13.240425 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.240432 | orchestrator | 2025-05-14 02:38:13.240439 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-05-14 02:38:13.240446 | orchestrator | Wednesday 14 May 2025 02:30:18 +0000 (0:00:00.633) 0:05:25.532 ********* 2025-05-14 02:38:13.240453 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 2025-05-14 02:38:13.240460 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.240467 | orchestrator | 2025-05-14 02:38:13.240473 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-05-14 02:38:13.240481 | orchestrator | Wednesday 14 May 2025 02:30:39 +0000 (0:00:21.507) 0:05:47.040 ********* 2025-05-14 02:38:13.240488 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.240494 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.240502 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.240515 | orchestrator | 2025-05-14 02:38:13.240522 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-05-14 02:38:13.240529 | orchestrator | Wednesday 14 May 2025 02:30:47 +0000 (0:00:07.610) 0:05:54.651 ********* 2025-05-14 02:38:13.240536 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.240543 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.240550 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.240557 | orchestrator | 2025-05-14 02:38:13.240564 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 02:38:13.240571 | orchestrator | Wednesday 14 May 2025 02:30:48 +0000 (0:00:01.131) 0:05:55.782 ********* 2025-05-14 02:38:13.240578 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.240603 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.240610 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.240618 | orchestrator | 2025-05-14 02:38:13.240625 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-14 02:38:13.240631 | orchestrator | Wednesday 14 May 2025 02:30:49 +0000 (0:00:00.725) 0:05:56.508 ********* 2025-05-14 02:38:13.240639 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.240646 | orchestrator | 2025-05-14 02:38:13.240653 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-14 02:38:13.240667 | orchestrator | Wednesday 14 May 2025 02:30:50 +0000 (0:00:00.768) 0:05:57.276 ********* 2025-05-14 02:38:13.240674 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.240681 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.240688 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.240695 | orchestrator | 2025-05-14 02:38:13.240703 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-14 02:38:13.240710 | orchestrator | Wednesday 14 May 2025 02:30:50 +0000 (0:00:00.406) 0:05:57.683 ********* 2025-05-14 02:38:13.240729 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.240736 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.240743 | 
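The quorum wait and key fetch can be reproduced by hand with a sketch like the following; the ceph-mon-<hostname> container name is the usual ceph-ansible convention and an assumption here:

    # the role retries until the local monitor reports an established quorum
    docker exec ceph-mon-testbed-node-0 ceph quorum_status --format json
    # once quorum exists, the initial keys (admin, bootstrap-*) become readable
    docker exec ceph-mon-testbed-node-0 ceph auth get client.admin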
orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.240750 | orchestrator | 2025-05-14 02:38:13.240757 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-14 02:38:13.240764 | orchestrator | Wednesday 14 May 2025 02:30:51 +0000 (0:00:01.303) 0:05:58.986 ********* 2025-05-14 02:38:13.240772 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:38:13.240779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:38:13.240787 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:38:13.240794 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.240801 | orchestrator | 2025-05-14 02:38:13.240808 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-14 02:38:13.240816 | orchestrator | Wednesday 14 May 2025 02:30:52 +0000 (0:00:01.199) 0:06:00.186 ********* 2025-05-14 02:38:13.240821 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.240826 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.240830 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.240835 | orchestrator | 2025-05-14 02:38:13.240844 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:38:13.240848 | orchestrator | Wednesday 14 May 2025 02:30:53 +0000 (0:00:00.405) 0:06:00.592 ********* 2025-05-14 02:38:13.240853 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.240857 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.240901 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.240907 | orchestrator | 2025-05-14 02:38:13.240911 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-14 02:38:13.240916 | orchestrator | 2025-05-14 02:38:13.240920 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:38:13.240924 | orchestrator | Wednesday 14 May 2025 02:30:55 +0000 (0:00:01.962) 0:06:02.554 ********* 2025-05-14 02:38:13.240929 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.240939 | orchestrator | 2025-05-14 02:38:13.240943 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 02:38:13.240948 | orchestrator | Wednesday 14 May 2025 02:30:55 +0000 (0:00:00.628) 0:06:03.183 ********* 2025-05-14 02:38:13.240952 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.240957 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.240961 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.240966 | orchestrator | 2025-05-14 02:38:13.240971 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:38:13.240975 | orchestrator | Wednesday 14 May 2025 02:30:56 +0000 (0:00:00.654) 0:06:03.837 ********* 2025-05-14 02:38:13.240980 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.240984 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.240989 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.240993 | orchestrator | 2025-05-14 02:38:13.240998 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:38:13.241002 | orchestrator | Wednesday 14 May 2025 02:30:56 +0000 (0:00:00.289) 0:06:04.126 ********* 
2025-05-14 02:38:13.241006 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241011 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241015 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241020 | orchestrator | 2025-05-14 02:38:13.241025 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:38:13.241032 | orchestrator | Wednesday 14 May 2025 02:30:57 +0000 (0:00:00.434) 0:06:04.561 ********* 2025-05-14 02:38:13.241040 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241047 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241054 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241061 | orchestrator | 2025-05-14 02:38:13.241068 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:38:13.241075 | orchestrator | Wednesday 14 May 2025 02:30:57 +0000 (0:00:00.309) 0:06:04.870 ********* 2025-05-14 02:38:13.241083 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.241090 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.241105 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.241112 | orchestrator | 2025-05-14 02:38:13.241119 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:38:13.241127 | orchestrator | Wednesday 14 May 2025 02:30:58 +0000 (0:00:00.674) 0:06:05.544 ********* 2025-05-14 02:38:13.241135 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241142 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241149 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241157 | orchestrator | 2025-05-14 02:38:13.241164 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:38:13.241172 | orchestrator | Wednesday 14 May 2025 02:30:58 +0000 (0:00:00.324) 0:06:05.869 ********* 2025-05-14 02:38:13.241179 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241187 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241195 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241202 | orchestrator | 2025-05-14 02:38:13.241209 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:38:13.241216 | orchestrator | Wednesday 14 May 2025 02:30:59 +0000 (0:00:00.577) 0:06:06.447 ********* 2025-05-14 02:38:13.241223 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241230 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241237 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241244 | orchestrator | 2025-05-14 02:38:13.241250 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:38:13.241264 | orchestrator | Wednesday 14 May 2025 02:30:59 +0000 (0:00:00.348) 0:06:06.795 ********* 2025-05-14 02:38:13.241271 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241278 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241302 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241309 | orchestrator | 2025-05-14 02:38:13.241316 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:38:13.241323 | orchestrator | Wednesday 14 May 2025 02:30:59 +0000 (0:00:00.297) 0:06:07.093 ********* 2025-05-14 02:38:13.241331 | orchestrator | skipping: [testbed-node-0] 
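The container checks in this block amount to a name-filtered process listing; a sketch assuming the docker CLI and the ceph-<daemon>-<hostname> naming convention:

    # a non-empty result marks the daemon as running on this host
    docker ps -q --filter name=ceph-mon-testbed-node-0
    docker ps -q --filter name=ceph-mgr-testbed-node-0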
2025-05-14 02:38:13.241337 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241344 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241351 | orchestrator | 2025-05-14 02:38:13.241358 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:38:13.241365 | orchestrator | Wednesday 14 May 2025 02:31:00 +0000 (0:00:00.279) 0:06:07.372 ********* 2025-05-14 02:38:13.241372 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.241379 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.241386 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.241393 | orchestrator | 2025-05-14 02:38:13.241401 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:38:13.241408 | orchestrator | Wednesday 14 May 2025 02:31:01 +0000 (0:00:00.976) 0:06:08.349 ********* 2025-05-14 02:38:13.241415 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241422 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241429 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241436 | orchestrator | 2025-05-14 02:38:13.241443 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 02:38:13.241450 | orchestrator | Wednesday 14 May 2025 02:31:01 +0000 (0:00:00.319) 0:06:08.668 ********* 2025-05-14 02:38:13.241456 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.241463 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.241474 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.241481 | orchestrator | 2025-05-14 02:38:13.241488 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:38:13.241495 | orchestrator | Wednesday 14 May 2025 02:31:01 +0000 (0:00:00.430) 0:06:09.099 ********* 2025-05-14 02:38:13.241501 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241508 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241515 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241522 | orchestrator | 2025-05-14 02:38:13.241529 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:38:13.241536 | orchestrator | Wednesday 14 May 2025 02:31:02 +0000 (0:00:00.348) 0:06:09.447 ********* 2025-05-14 02:38:13.241543 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241550 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241557 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241564 | orchestrator | 2025-05-14 02:38:13.241571 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:38:13.241578 | orchestrator | Wednesday 14 May 2025 02:31:02 +0000 (0:00:00.487) 0:06:09.934 ********* 2025-05-14 02:38:13.241605 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241613 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241620 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241627 | orchestrator | 2025-05-14 02:38:13.241634 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:38:13.241641 | orchestrator | Wednesday 14 May 2025 02:31:02 +0000 (0:00:00.328) 0:06:10.263 ********* 2025-05-14 02:38:13.241648 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241654 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241661 
| orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241668 | orchestrator | 2025-05-14 02:38:13.241676 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:38:13.241683 | orchestrator | Wednesday 14 May 2025 02:31:03 +0000 (0:00:00.311) 0:06:10.575 ********* 2025-05-14 02:38:13.241690 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241697 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241704 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241711 | orchestrator | 2025-05-14 02:38:13.241722 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:38:13.241730 | orchestrator | Wednesday 14 May 2025 02:31:03 +0000 (0:00:00.379) 0:06:10.955 ********* 2025-05-14 02:38:13.241737 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.241745 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.241752 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.241760 | orchestrator | 2025-05-14 02:38:13.241767 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:38:13.241775 | orchestrator | Wednesday 14 May 2025 02:31:04 +0000 (0:00:00.711) 0:06:11.666 ********* 2025-05-14 02:38:13.241782 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.241791 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.241798 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.241805 | orchestrator | 2025-05-14 02:38:13.241814 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:38:13.241819 | orchestrator | Wednesday 14 May 2025 02:31:04 +0000 (0:00:00.342) 0:06:12.009 ********* 2025-05-14 02:38:13.241823 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241828 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241832 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241837 | orchestrator | 2025-05-14 02:38:13.241841 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:38:13.241845 | orchestrator | Wednesday 14 May 2025 02:31:05 +0000 (0:00:00.355) 0:06:12.364 ********* 2025-05-14 02:38:13.241850 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241854 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241859 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241863 | orchestrator | 2025-05-14 02:38:13.241868 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:38:13.241872 | orchestrator | Wednesday 14 May 2025 02:31:05 +0000 (0:00:00.387) 0:06:12.751 ********* 2025-05-14 02:38:13.241877 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241881 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241886 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241890 | orchestrator | 2025-05-14 02:38:13.241894 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:38:13.241899 | orchestrator | Wednesday 14 May 2025 02:31:06 +0000 (0:00:00.652) 0:06:13.404 ********* 2025-05-14 02:38:13.241909 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241914 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241918 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241923 | orchestrator | 
2025-05-14 02:38:13.241927 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:38:13.241932 | orchestrator | Wednesday 14 May 2025 02:31:06 +0000 (0:00:00.359) 0:06:13.764 ********* 2025-05-14 02:38:13.241936 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241941 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241945 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241950 | orchestrator | 2025-05-14 02:38:13.241954 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:38:13.241959 | orchestrator | Wednesday 14 May 2025 02:31:06 +0000 (0:00:00.371) 0:06:14.135 ********* 2025-05-14 02:38:13.241963 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241968 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241972 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.241976 | orchestrator | 2025-05-14 02:38:13.241981 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:38:13.241986 | orchestrator | Wednesday 14 May 2025 02:31:07 +0000 (0:00:00.335) 0:06:14.470 ********* 2025-05-14 02:38:13.241990 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.241994 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.241999 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242003 | orchestrator | 2025-05-14 02:38:13.242008 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:38:13.242035 | orchestrator | Wednesday 14 May 2025 02:31:07 +0000 (0:00:00.632) 0:06:15.102 ********* 2025-05-14 02:38:13.242041 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242049 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242054 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242058 | orchestrator | 2025-05-14 02:38:13.242063 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:38:13.242068 | orchestrator | Wednesday 14 May 2025 02:31:08 +0000 (0:00:00.362) 0:06:15.465 ********* 2025-05-14 02:38:13.242072 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242076 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242081 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242085 | orchestrator | 2025-05-14 02:38:13.242090 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:38:13.242095 | orchestrator | Wednesday 14 May 2025 02:31:08 +0000 (0:00:00.409) 0:06:15.875 ********* 2025-05-14 02:38:13.242100 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242104 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242109 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242113 | orchestrator | 2025-05-14 02:38:13.242118 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:38:13.242122 | orchestrator | Wednesday 14 May 2025 02:31:08 +0000 (0:00:00.324) 0:06:16.199 ********* 2025-05-14 02:38:13.242127 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242131 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242135 | orchestrator | skipping: [testbed-node-2] 2025-05-14 
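On OSD hosts (all of these tasks are skipped on the control nodes above) the num_osds facts are derived from ceph-volume reports; a sketch with an illustrative device list:

    # planned OSDs for the given devices, as JSON the role can count
    ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc
    # OSDs that already exist on this host
    ceph-volume lvm list --format json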
02:38:13.242140 | orchestrator | 2025-05-14 02:38:13.242144 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:38:13.242149 | orchestrator | Wednesday 14 May 2025 02:31:09 +0000 (0:00:00.614) 0:06:16.814 ********* 2025-05-14 02:38:13.242153 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242158 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242162 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242167 | orchestrator | 2025-05-14 02:38:13.242171 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:38:13.242176 | orchestrator | Wednesday 14 May 2025 02:31:09 +0000 (0:00:00.378) 0:06:17.192 ********* 2025-05-14 02:38:13.242180 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:38:13.242185 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:38:13.242189 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242194 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:38:13.242198 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:38:13.242202 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242207 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:38:13.242211 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:38:13.242216 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242220 | orchestrator | 2025-05-14 02:38:13.242225 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:38:13.242229 | orchestrator | Wednesday 14 May 2025 02:31:10 +0000 (0:00:00.436) 0:06:17.629 ********* 2025-05-14 02:38:13.242233 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 02:38:13.242238 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 02:38:13.242242 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242247 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 02:38:13.242252 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 02:38:13.242256 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242260 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 02:38:13.242265 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 02:38:13.242274 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242278 | orchestrator | 2025-05-14 02:38:13.242283 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:38:13.242287 | orchestrator | Wednesday 14 May 2025 02:31:10 +0000 (0:00:00.367) 0:06:17.996 ********* 2025-05-14 02:38:13.242292 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242296 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242300 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242305 | orchestrator | 2025-05-14 02:38:13.242310 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:38:13.242318 | orchestrator | Wednesday 14 May 2025 02:31:11 +0000 (0:00:00.635) 0:06:18.632 ********* 2025-05-14 02:38:13.242323 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242327 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242332 | 
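The _osd_memory_target fact computed (and skipped here) in the tasks above shares a fraction of host RAM across the local OSDs; a rough shell equivalent, where the 0.7 safety factor and the OSD count are assumptions for illustration:

    total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
    num_osds=3
    # bytes of RAM each OSD may target: total * safety_factor / num_osds
    echo $(( total_kb * 1024 * 7 / 10 / num_osds ))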
orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242336 | orchestrator | 2025-05-14 02:38:13.242341 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:38:13.242346 | orchestrator | Wednesday 14 May 2025 02:31:11 +0000 (0:00:00.376) 0:06:19.008 ********* 2025-05-14 02:38:13.242350 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242355 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242359 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242363 | orchestrator | 2025-05-14 02:38:13.242368 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:38:13.242372 | orchestrator | Wednesday 14 May 2025 02:31:12 +0000 (0:00:00.368) 0:06:19.377 ********* 2025-05-14 02:38:13.242377 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242381 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242386 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242390 | orchestrator | 2025-05-14 02:38:13.242395 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:38:13.242399 | orchestrator | Wednesday 14 May 2025 02:31:12 +0000 (0:00:00.353) 0:06:19.730 ********* 2025-05-14 02:38:13.242404 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242408 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242415 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242423 | orchestrator | 2025-05-14 02:38:13.242430 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:38:13.242444 | orchestrator | Wednesday 14 May 2025 02:31:13 +0000 (0:00:00.612) 0:06:20.343 ********* 2025-05-14 02:38:13.242452 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242459 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242466 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242474 | orchestrator | 2025-05-14 02:38:13.242481 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:38:13.242489 | orchestrator | Wednesday 14 May 2025 02:31:13 +0000 (0:00:00.360) 0:06:20.703 ********* 2025-05-14 02:38:13.242496 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.242503 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.242511 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.242518 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242525 | orchestrator | 2025-05-14 02:38:13.242533 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:38:13.242540 | orchestrator | Wednesday 14 May 2025 02:31:13 +0000 (0:00:00.421) 0:06:21.125 ********* 2025-05-14 02:38:13.242547 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.242555 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.242563 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.242571 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242578 | orchestrator | 2025-05-14 02:38:13.242604 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] 
****** 2025-05-14 02:38:13.242611 | orchestrator | Wednesday 14 May 2025 02:31:14 +0000 (0:00:00.438) 0:06:21.564 ********* 2025-05-14 02:38:13.242619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.242626 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.242633 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.242640 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242647 | orchestrator | 2025-05-14 02:38:13.242654 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.242661 | orchestrator | Wednesday 14 May 2025 02:31:14 +0000 (0:00:00.421) 0:06:21.985 ********* 2025-05-14 02:38:13.242668 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242676 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242683 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242691 | orchestrator | 2025-05-14 02:38:13.242698 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:38:13.242705 | orchestrator | Wednesday 14 May 2025 02:31:15 +0000 (0:00:00.612) 0:06:22.598 ********* 2025-05-14 02:38:13.242737 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:38:13.242744 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242751 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:38:13.242758 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242765 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:38:13.242779 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242786 | orchestrator | 2025-05-14 02:38:13.242793 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:38:13.242800 | orchestrator | Wednesday 14 May 2025 02:31:15 +0000 (0:00:00.483) 0:06:23.082 ********* 2025-05-14 02:38:13.242807 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242814 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242821 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242828 | orchestrator | 2025-05-14 02:38:13.242835 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.242842 | orchestrator | Wednesday 14 May 2025 02:31:16 +0000 (0:00:00.359) 0:06:23.441 ********* 2025-05-14 02:38:13.242849 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242856 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242863 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242871 | orchestrator | 2025-05-14 02:38:13.242878 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:38:13.242885 | orchestrator | Wednesday 14 May 2025 02:31:16 +0000 (0:00:00.341) 0:06:23.782 ********* 2025-05-14 02:38:13.242892 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:38:13.242899 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242906 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:38:13.242933 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242945 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:38:13.242952 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.242959 | orchestrator | 2025-05-14 
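The skipped _radosgw_address facts resolve the RGW bind address from a configured interface or address block; roughly, on a shell (the interface name is illustrative):

    # first IPv4 address on the radosgw interface
    ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1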
02:38:13.242966 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:38:13.242973 | orchestrator | Wednesday 14 May 2025 02:31:17 +0000 (0:00:01.131) 0:06:24.913 ********* 2025-05-14 02:38:13.242980 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.242987 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.242994 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.243001 | orchestrator | 2025-05-14 02:38:13.243008 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:38:13.243015 | orchestrator | Wednesday 14 May 2025 02:31:18 +0000 (0:00:00.359) 0:06:25.273 ********* 2025-05-14 02:38:13.243023 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.243034 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.243043 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.243050 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 02:38:13.243058 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 02:38:13.243065 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 02:38:13.243073 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.243078 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.243082 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 02:38:13.243087 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 02:38:13.243094 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 02:38:13.243099 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.243103 | orchestrator | 2025-05-14 02:38:13.243108 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:38:13.243112 | orchestrator | Wednesday 14 May 2025 02:31:18 +0000 (0:00:00.682) 0:06:25.955 ********* 2025-05-14 02:38:13.243117 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.243121 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.243126 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.243130 | orchestrator | 2025-05-14 02:38:13.243134 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:38:13.243139 | orchestrator | Wednesday 14 May 2025 02:31:19 +0000 (0:00:00.872) 0:06:26.828 ********* 2025-05-14 02:38:13.243143 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.243148 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.243152 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.243157 | orchestrator | 2025-05-14 02:38:13.243161 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:38:13.243166 | orchestrator | Wednesday 14 May 2025 02:31:20 +0000 (0:00:00.596) 0:06:27.425 ********* 2025-05-14 02:38:13.243170 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.243175 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.243179 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.243183 | orchestrator | 2025-05-14 02:38:13.243188 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:38:13.243193 | orchestrator | Wednesday 14 May 2025 
02:31:21 +0000 (0:00:00.910) 0:06:28.336 ********* 2025-05-14 02:38:13.243197 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.243202 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.243206 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.243210 | orchestrator | 2025-05-14 02:38:13.243215 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-05-14 02:38:13.243219 | orchestrator | Wednesday 14 May 2025 02:31:21 +0000 (0:00:00.573) 0:06:28.909 ********* 2025-05-14 02:38:13.243224 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:38:13.243228 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:38:13.243233 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:38:13.243237 | orchestrator | 2025-05-14 02:38:13.243242 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-05-14 02:38:13.243246 | orchestrator | Wednesday 14 May 2025 02:31:22 +0000 (0:00:00.977) 0:06:29.887 ********* 2025-05-14 02:38:13.243251 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.243255 | orchestrator | 2025-05-14 02:38:13.243259 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-05-14 02:38:13.243264 | orchestrator | Wednesday 14 May 2025 02:31:23 +0000 (0:00:00.840) 0:06:30.727 ********* 2025-05-14 02:38:13.243268 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.243273 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.243283 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.243288 | orchestrator | 2025-05-14 02:38:13.243292 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-05-14 02:38:13.243297 | orchestrator | Wednesday 14 May 2025 02:31:24 +0000 (0:00:00.691) 0:06:31.418 ********* 2025-05-14 02:38:13.243301 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.243306 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.243310 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.243314 | orchestrator | 2025-05-14 02:38:13.243319 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-05-14 02:38:13.243323 | orchestrator | Wednesday 14 May 2025 02:31:24 +0000 (0:00:00.587) 0:06:32.006 ********* 2025-05-14 02:38:13.243328 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:38:13.243332 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:38:13.243337 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:38:13.243341 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-14 02:38:13.243345 | orchestrator | 2025-05-14 02:38:13.243350 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-05-14 02:38:13.243354 | orchestrator | Wednesday 14 May 2025 02:31:33 +0000 (0:00:08.384) 0:06:40.390 ********* 2025-05-14 02:38:13.243363 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.243368 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.243372 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.243377 | orchestrator | 2025-05-14 02:38:13.243382 | orchestrator | TASK [ceph-mgr : get keys 
from monitors] *************************************** 2025-05-14 02:38:13.243386 | orchestrator | Wednesday 14 May 2025 02:31:33 +0000 (0:00:00.375) 0:06:40.766 ********* 2025-05-14 02:38:13.243390 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-14 02:38:13.243395 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-14 02:38:13.243399 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-14 02:38:13.243404 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-14 02:38:13.243408 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:38:13.243413 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:38:13.243417 | orchestrator | 2025-05-14 02:38:13.243422 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-05-14 02:38:13.243426 | orchestrator | Wednesday 14 May 2025 02:31:35 +0000 (0:00:02.030) 0:06:42.797 ********* 2025-05-14 02:38:13.243431 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-14 02:38:13.243435 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-14 02:38:13.243440 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-14 02:38:13.243444 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:38:13.243448 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-14 02:38:13.243453 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-14 02:38:13.243457 | orchestrator | 2025-05-14 02:38:13.243464 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-05-14 02:38:13.243469 | orchestrator | Wednesday 14 May 2025 02:31:36 +0000 (0:00:01.147) 0:06:43.944 ********* 2025-05-14 02:38:13.243473 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.243478 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.243482 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.243486 | orchestrator | 2025-05-14 02:38:13.243491 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-05-14 02:38:13.243495 | orchestrator | Wednesday 14 May 2025 02:31:37 +0000 (0:00:00.621) 0:06:44.565 ********* 2025-05-14 02:38:13.243500 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.243504 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.243508 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.243513 | orchestrator | 2025-05-14 02:38:13.243517 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-05-14 02:38:13.243525 | orchestrator | Wednesday 14 May 2025 02:31:37 +0000 (0:00:00.480) 0:06:45.046 ********* 2025-05-14 02:38:13.243530 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.243534 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.243539 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.243543 | orchestrator | 2025-05-14 02:38:13.243548 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-05-14 02:38:13.243552 | orchestrator | Wednesday 14 May 2025 02:31:38 +0000 (0:00:00.331) 0:06:45.377 ********* 2025-05-14 02:38:13.243557 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.243562 | orchestrator | 2025-05-14 02:38:13.243566 | orchestrator | TASK 
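A sketch of the mgr keyring creation that the tasks above delegate to the first monitor, assuming the standard mgr capability profile and the ceph-mon-<hostname> container name:

    docker exec ceph-mon-testbed-node-0 \
        ceph auth get-or-create mgr.testbed-node-0 \
        mon 'allow profile mgr' osd 'allow *' mds 'allow *'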
[ceph-mgr : ensure systemd service override directory exists] ************* 2025-05-14 02:38:13.243570 | orchestrator | Wednesday 14 May 2025 02:31:38 +0000 (0:00:00.544) 0:06:45.922 ********* 2025-05-14 02:38:13.243575 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.243579 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.243584 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.243626 | orchestrator | 2025-05-14 02:38:13.243632 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-05-14 02:38:13.243636 | orchestrator | Wednesday 14 May 2025 02:31:39 +0000 (0:00:00.521) 0:06:46.444 ********* 2025-05-14 02:38:13.243642 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.243649 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.243657 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.243662 | orchestrator | 2025-05-14 02:38:13.243666 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-05-14 02:38:13.243671 | orchestrator | Wednesday 14 May 2025 02:31:39 +0000 (0:00:00.348) 0:06:46.792 ********* 2025-05-14 02:38:13.243676 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.243683 | orchestrator | 2025-05-14 02:38:13.243691 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-05-14 02:38:13.243698 | orchestrator | Wednesday 14 May 2025 02:31:40 +0000 (0:00:00.549) 0:06:47.342 ********* 2025-05-14 02:38:13.243706 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.243713 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.243720 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.243728 | orchestrator | 2025-05-14 02:38:13.243734 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-05-14 02:38:13.243739 | orchestrator | Wednesday 14 May 2025 02:31:41 +0000 (0:00:01.406) 0:06:48.749 ********* 2025-05-14 02:38:13.243743 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.243748 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.243752 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.243757 | orchestrator | 2025-05-14 02:38:13.243761 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-05-14 02:38:13.243766 | orchestrator | Wednesday 14 May 2025 02:31:42 +0000 (0:00:01.151) 0:06:49.900 ********* 2025-05-14 02:38:13.243770 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.243775 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.243779 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.243784 | orchestrator | 2025-05-14 02:38:13.243788 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-05-14 02:38:13.243792 | orchestrator | Wednesday 14 May 2025 02:31:44 +0000 (0:00:01.652) 0:06:51.553 ********* 2025-05-14 02:38:13.243797 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.243801 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.243810 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.243815 | orchestrator | 2025-05-14 02:38:13.243819 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-05-14 02:38:13.243824 | orchestrator | Wednesday 14 May 
2025 02:31:46 +0000 (0:00:02.197) 0:06:53.750 ********* 2025-05-14 02:38:13.243828 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.243837 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.243842 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-14 02:38:13.243846 | orchestrator | 2025-05-14 02:38:13.243851 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-05-14 02:38:13.243855 | orchestrator | Wednesday 14 May 2025 02:31:47 +0000 (0:00:00.601) 0:06:54.352 ********* 2025-05-14 02:38:13.243859 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-05-14 02:38:13.243864 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-05-14 02:38:13.243869 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:38:13.243873 | orchestrator | 2025-05-14 02:38:13.243878 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-05-14 02:38:13.243882 | orchestrator | Wednesday 14 May 2025 02:32:00 +0000 (0:00:13.454) 0:07:07.806 ********* 2025-05-14 02:38:13.243886 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:38:13.243891 | orchestrator | 2025-05-14 02:38:13.243895 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-14 02:38:13.243903 | orchestrator | Wednesday 14 May 2025 02:32:02 +0000 (0:00:01.882) 0:07:09.688 ********* 2025-05-14 02:38:13.243908 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.243912 | orchestrator | 2025-05-14 02:38:13.243917 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-05-14 02:38:13.243921 | orchestrator | Wednesday 14 May 2025 02:32:02 +0000 (0:00:00.499) 0:07:10.187 ********* 2025-05-14 02:38:13.243925 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.243930 | orchestrator | 2025-05-14 02:38:13.243934 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-05-14 02:38:13.243939 | orchestrator | Wednesday 14 May 2025 02:32:03 +0000 (0:00:00.329) 0:07:10.517 ********* 2025-05-14 02:38:13.243943 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-14 02:38:13.243948 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-14 02:38:13.243952 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-14 02:38:13.243956 | orchestrator | 2025-05-14 02:38:13.243961 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-05-14 02:38:13.243965 | orchestrator | Wednesday 14 May 2025 02:32:09 +0000 (0:00:06.228) 0:07:16.745 ********* 2025-05-14 02:38:13.243970 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-14 02:38:13.243974 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-14 02:38:13.243978 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-14 02:38:13.243983 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-14 02:38:13.243987 | orchestrator | 2025-05-14 02:38:13.243992 | orchestrator | RUNNING HANDLER 
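The mgr module handling above maps onto a handful of ceph CLI calls run through one of the monitors; a sketch using the module names from the log (container name again assumed):

    # "wait for all mgr to be up" polls this until every mgr is registered
    docker exec ceph-mon-testbed-node-0 ceph mgr dump --format json
    # reconcile the enabled module list
    docker exec ceph-mon-testbed-node-0 ceph mgr module ls --format json
    docker exec ceph-mon-testbed-node-0 ceph mgr module disable restful
    docker exec ceph-mon-testbed-node-0 ceph mgr module enable dashboard
    docker exec ceph-mon-testbed-node-0 ceph mgr module enable prometheus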
[ceph-handler : make tempdir for scripts] ********************** 2025-05-14 02:38:13.243996 | orchestrator | Wednesday 14 May 2025 02:32:14 +0000 (0:00:04.927) 0:07:21.673 ********* 2025-05-14 02:38:13.244001 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.244005 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.244010 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.244014 | orchestrator | 2025-05-14 02:38:13.244019 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-14 02:38:13.244023 | orchestrator | Wednesday 14 May 2025 02:32:15 +0000 (0:00:00.770) 0:07:22.443 ********* 2025-05-14 02:38:13.244028 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:13.244032 | orchestrator | 2025-05-14 02:38:13.244037 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-14 02:38:13.244041 | orchestrator | Wednesday 14 May 2025 02:32:16 +0000 (0:00:00.869) 0:07:23.312 ********* 2025-05-14 02:38:13.244050 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.244055 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.244059 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.244064 | orchestrator | 2025-05-14 02:38:13.244068 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-14 02:38:13.244073 | orchestrator | Wednesday 14 May 2025 02:32:16 +0000 (0:00:00.421) 0:07:23.734 ********* 2025-05-14 02:38:13.244077 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.244082 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.244086 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.244091 | orchestrator | 2025-05-14 02:38:13.244095 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-14 02:38:13.244099 | orchestrator | Wednesday 14 May 2025 02:32:17 +0000 (0:00:01.252) 0:07:24.987 ********* 2025-05-14 02:38:13.244104 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:38:13.244108 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:38:13.244113 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:38:13.244118 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.244122 | orchestrator | 2025-05-14 02:38:13.244126 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-14 02:38:13.244131 | orchestrator | Wednesday 14 May 2025 02:32:18 +0000 (0:00:01.212) 0:07:26.199 ********* 2025-05-14 02:38:13.244138 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.244145 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.244152 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.244158 | orchestrator | 2025-05-14 02:38:13.244168 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:38:13.244175 | orchestrator | Wednesday 14 May 2025 02:32:19 +0000 (0:00:00.369) 0:07:26.569 ********* 2025-05-14 02:38:13.244182 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.244189 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.244196 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.244204 | orchestrator | 2025-05-14 02:38:13.244208 | orchestrator | PLAY [Apply role ceph-osd] 
***************************************************** 2025-05-14 02:38:13.244213 | orchestrator | 2025-05-14 02:38:13.244217 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:38:13.244223 | orchestrator | Wednesday 14 May 2025 02:32:21 +0000 (0:00:02.202) 0:07:28.771 ********* 2025-05-14 02:38:13.244229 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.244236 | orchestrator | 2025-05-14 02:38:13.244243 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 02:38:13.244249 | orchestrator | Wednesday 14 May 2025 02:32:22 +0000 (0:00:00.802) 0:07:29.573 ********* 2025-05-14 02:38:13.244256 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244263 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244269 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244276 | orchestrator | 2025-05-14 02:38:13.244283 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:38:13.244290 | orchestrator | Wednesday 14 May 2025 02:32:22 +0000 (0:00:00.329) 0:07:29.903 ********* 2025-05-14 02:38:13.244297 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.244304 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.244311 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.244315 | orchestrator | 2025-05-14 02:38:13.244322 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:38:13.244327 | orchestrator | Wednesday 14 May 2025 02:32:23 +0000 (0:00:00.720) 0:07:30.624 ********* 2025-05-14 02:38:13.244331 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.244335 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.244339 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.244343 | orchestrator | 2025-05-14 02:38:13.244351 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:38:13.244355 | orchestrator | Wednesday 14 May 2025 02:32:24 +0000 (0:00:01.050) 0:07:31.675 ********* 2025-05-14 02:38:13.244359 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.244363 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.244367 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.244371 | orchestrator | 2025-05-14 02:38:13.244375 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:38:13.244379 | orchestrator | Wednesday 14 May 2025 02:32:25 +0000 (0:00:00.768) 0:07:32.443 ********* 2025-05-14 02:38:13.244384 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244387 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244391 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244395 | orchestrator | 2025-05-14 02:38:13.244400 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:38:13.244404 | orchestrator | Wednesday 14 May 2025 02:32:25 +0000 (0:00:00.343) 0:07:32.787 ********* 2025-05-14 02:38:13.244408 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244412 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244416 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244420 | orchestrator | 2025-05-14 02:38:13.244424 | orchestrator | TASK [ceph-handler 
: check for a nfs container] ******************************** 2025-05-14 02:38:13.244428 | orchestrator | Wednesday 14 May 2025 02:32:25 +0000 (0:00:00.296) 0:07:33.083 ********* 2025-05-14 02:38:13.244432 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244436 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244440 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244444 | orchestrator | 2025-05-14 02:38:13.244448 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:38:13.244452 | orchestrator | Wednesday 14 May 2025 02:32:26 +0000 (0:00:00.639) 0:07:33.722 ********* 2025-05-14 02:38:13.244456 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244460 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244464 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244468 | orchestrator | 2025-05-14 02:38:13.244472 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:38:13.244477 | orchestrator | Wednesday 14 May 2025 02:32:26 +0000 (0:00:00.332) 0:07:34.055 ********* 2025-05-14 02:38:13.244481 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244485 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244489 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244493 | orchestrator | 2025-05-14 02:38:13.244497 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:38:13.244501 | orchestrator | Wednesday 14 May 2025 02:32:27 +0000 (0:00:00.337) 0:07:34.393 ********* 2025-05-14 02:38:13.244505 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244509 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244513 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244517 | orchestrator | 2025-05-14 02:38:13.244521 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:38:13.244525 | orchestrator | Wednesday 14 May 2025 02:32:27 +0000 (0:00:00.320) 0:07:34.714 ********* 2025-05-14 02:38:13.244529 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.244533 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.244537 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.244541 | orchestrator | 2025-05-14 02:38:13.244545 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:38:13.244549 | orchestrator | Wednesday 14 May 2025 02:32:28 +0000 (0:00:01.213) 0:07:35.927 ********* 2025-05-14 02:38:13.244553 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244557 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244561 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244565 | orchestrator | 2025-05-14 02:38:13.244569 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 02:38:13.244577 | orchestrator | Wednesday 14 May 2025 02:32:28 +0000 (0:00:00.332) 0:07:36.259 ********* 2025-05-14 02:38:13.244582 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244602 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244607 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244611 | orchestrator | 2025-05-14 02:38:13.244628 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 
02:38:13.244632 | orchestrator | Wednesday 14 May 2025 02:32:29 +0000 (0:00:00.355) 0:07:36.615 ********* 2025-05-14 02:38:13.244636 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.244640 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.244644 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.244648 | orchestrator | 2025-05-14 02:38:13.244653 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:38:13.244657 | orchestrator | Wednesday 14 May 2025 02:32:29 +0000 (0:00:00.341) 0:07:36.957 ********* 2025-05-14 02:38:13.244661 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.244665 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.244669 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.244673 | orchestrator | 2025-05-14 02:38:13.244677 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:38:13.244681 | orchestrator | Wednesday 14 May 2025 02:32:30 +0000 (0:00:00.648) 0:07:37.605 ********* 2025-05-14 02:38:13.244687 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.244694 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.244700 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.244707 | orchestrator | 2025-05-14 02:38:13.244713 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:38:13.244720 | orchestrator | Wednesday 14 May 2025 02:32:30 +0000 (0:00:00.371) 0:07:37.977 ********* 2025-05-14 02:38:13.244728 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244734 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244741 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244748 | orchestrator | 2025-05-14 02:38:13.244755 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:38:13.244759 | orchestrator | Wednesday 14 May 2025 02:32:31 +0000 (0:00:00.302) 0:07:38.279 ********* 2025-05-14 02:38:13.244763 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244767 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244771 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244775 | orchestrator | 2025-05-14 02:38:13.244779 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:38:13.244783 | orchestrator | Wednesday 14 May 2025 02:32:31 +0000 (0:00:00.331) 0:07:38.611 ********* 2025-05-14 02:38:13.244787 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244791 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244795 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244799 | orchestrator | 2025-05-14 02:38:13.244803 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:38:13.244807 | orchestrator | Wednesday 14 May 2025 02:32:31 +0000 (0:00:00.642) 0:07:39.253 ********* 2025-05-14 02:38:13.244811 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.244816 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.244820 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.244824 | orchestrator | 2025-05-14 02:38:13.244828 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:38:13.244832 | orchestrator | Wednesday 14 May 2025 02:32:32 +0000 (0:00:00.418) 0:07:39.672 ********* 2025-05-14 02:38:13.244836 
| orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244840 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244844 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244848 | orchestrator | 2025-05-14 02:38:13.244852 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:38:13.244856 | orchestrator | Wednesday 14 May 2025 02:32:32 +0000 (0:00:00.360) 0:07:40.033 ********* 2025-05-14 02:38:13.244864 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244869 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244873 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244877 | orchestrator | 2025-05-14 02:38:13.244881 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:38:13.244885 | orchestrator | Wednesday 14 May 2025 02:32:33 +0000 (0:00:00.345) 0:07:40.378 ********* 2025-05-14 02:38:13.244889 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244893 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244897 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244901 | orchestrator | 2025-05-14 02:38:13.244905 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:38:13.244909 | orchestrator | Wednesday 14 May 2025 02:32:33 +0000 (0:00:00.627) 0:07:41.006 ********* 2025-05-14 02:38:13.244913 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244917 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244921 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244925 | orchestrator | 2025-05-14 02:38:13.244929 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:38:13.244933 | orchestrator | Wednesday 14 May 2025 02:32:34 +0000 (0:00:00.336) 0:07:41.343 ********* 2025-05-14 02:38:13.244937 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244941 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244945 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244949 | orchestrator | 2025-05-14 02:38:13.244953 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:38:13.244957 | orchestrator | Wednesday 14 May 2025 02:32:34 +0000 (0:00:00.349) 0:07:41.692 ********* 2025-05-14 02:38:13.244961 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244965 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244969 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244973 | orchestrator | 2025-05-14 02:38:13.244977 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:38:13.244981 | orchestrator | Wednesday 14 May 2025 02:32:34 +0000 (0:00:00.318) 0:07:42.011 ********* 2025-05-14 02:38:13.244985 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.244989 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.244993 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.244997 | orchestrator | 2025-05-14 02:38:13.245001 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:38:13.245005 | orchestrator | Wednesday 14 May 2025 02:32:35 +0000 (0:00:00.676) 0:07:42.688 ********* 2025-05-14 02:38:13.245009 | orchestrator | skipping: 
[testbed-node-3] 2025-05-14 02:38:13.245017 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245021 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245025 | orchestrator | 2025-05-14 02:38:13.245029 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:38:13.245033 | orchestrator | Wednesday 14 May 2025 02:32:35 +0000 (0:00:00.350) 0:07:43.038 ********* 2025-05-14 02:38:13.245037 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245041 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245046 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245049 | orchestrator | 2025-05-14 02:38:13.245053 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:38:13.245058 | orchestrator | Wednesday 14 May 2025 02:32:36 +0000 (0:00:00.371) 0:07:43.410 ********* 2025-05-14 02:38:13.245062 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245066 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245070 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245074 | orchestrator | 2025-05-14 02:38:13.245078 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:38:13.245082 | orchestrator | Wednesday 14 May 2025 02:32:36 +0000 (0:00:00.340) 0:07:43.750 ********* 2025-05-14 02:38:13.245091 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245095 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245099 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245103 | orchestrator | 2025-05-14 02:38:13.245107 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:38:13.245111 | orchestrator | Wednesday 14 May 2025 02:32:37 +0000 (0:00:00.632) 0:07:44.383 ********* 2025-05-14 02:38:13.245118 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245122 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245126 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245130 | orchestrator | 2025-05-14 02:38:13.245134 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:38:13.245138 | orchestrator | Wednesday 14 May 2025 02:32:37 +0000 (0:00:00.342) 0:07:44.725 ********* 2025-05-14 02:38:13.245142 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:38:13.245146 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:38:13.245150 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:38:13.245154 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:38:13.245158 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245162 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245166 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:38:13.245170 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:38:13.245174 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245178 | orchestrator | 2025-05-14 02:38:13.245182 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:38:13.245186 | orchestrator | Wednesday 14 May 2025 02:32:37 +0000 (0:00:00.434) 0:07:45.160 ********* 2025-05-14 02:38:13.245190 | 
orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 02:38:13.245194 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 02:38:13.245198 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 02:38:13.245202 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245206 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 02:38:13.245210 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245214 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 02:38:13.245218 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 02:38:13.245222 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245226 | orchestrator | 2025-05-14 02:38:13.245230 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:38:13.245234 | orchestrator | Wednesday 14 May 2025 02:32:38 +0000 (0:00:00.382) 0:07:45.542 ********* 2025-05-14 02:38:13.245238 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245242 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245246 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245250 | orchestrator | 2025-05-14 02:38:13.245254 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:38:13.245258 | orchestrator | Wednesday 14 May 2025 02:32:38 +0000 (0:00:00.640) 0:07:46.183 ********* 2025-05-14 02:38:13.245262 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245266 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245270 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245274 | orchestrator | 2025-05-14 02:38:13.245278 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:38:13.245282 | orchestrator | Wednesday 14 May 2025 02:32:39 +0000 (0:00:00.368) 0:07:46.551 ********* 2025-05-14 02:38:13.245286 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245290 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245294 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245302 | orchestrator | 2025-05-14 02:38:13.245306 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:38:13.245310 | orchestrator | Wednesday 14 May 2025 02:32:39 +0000 (0:00:00.342) 0:07:46.894 ********* 2025-05-14 02:38:13.245314 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245318 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245322 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245326 | orchestrator | 2025-05-14 02:38:13.245330 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:38:13.245335 | orchestrator | Wednesday 14 May 2025 02:32:39 +0000 (0:00:00.333) 0:07:47.227 ********* 2025-05-14 02:38:13.245339 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245342 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245346 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245350 | orchestrator | 2025-05-14 02:38:13.245354 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:38:13.245361 | orchestrator | 
Wednesday 14 May 2025 02:32:40 +0000 (0:00:00.801) 0:07:48.028 ********* 2025-05-14 02:38:13.245365 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245370 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245374 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245378 | orchestrator | 2025-05-14 02:38:13.245382 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:38:13.245386 | orchestrator | Wednesday 14 May 2025 02:32:41 +0000 (0:00:00.336) 0:07:48.365 ********* 2025-05-14 02:38:13.245390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.245394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.245398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.245402 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245406 | orchestrator | 2025-05-14 02:38:13.245410 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:38:13.245414 | orchestrator | Wednesday 14 May 2025 02:32:41 +0000 (0:00:00.429) 0:07:48.795 ********* 2025-05-14 02:38:13.245418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.245422 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.245426 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.245430 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245434 | orchestrator | 2025-05-14 02:38:13.245438 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:38:13.245442 | orchestrator | Wednesday 14 May 2025 02:32:42 +0000 (0:00:00.526) 0:07:49.321 ********* 2025-05-14 02:38:13.245446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.245452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.245456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.245460 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245464 | orchestrator | 2025-05-14 02:38:13.245468 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.245472 | orchestrator | Wednesday 14 May 2025 02:32:42 +0000 (0:00:00.517) 0:07:49.838 ********* 2025-05-14 02:38:13.245476 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245480 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245484 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245488 | orchestrator | 2025-05-14 02:38:13.245492 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:38:13.245496 | orchestrator | Wednesday 14 May 2025 02:32:43 +0000 (0:00:00.616) 0:07:50.454 ********* 2025-05-14 02:38:13.245500 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:38:13.245504 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245508 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:38:13.245516 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245520 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:38:13.245524 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245528 | orchestrator | 2025-05-14 02:38:13.245532 | 
orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:38:13.245536 | orchestrator | Wednesday 14 May 2025 02:32:43 +0000 (0:00:00.479) 0:07:50.934 ********* 2025-05-14 02:38:13.245540 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245544 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245548 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245552 | orchestrator | 2025-05-14 02:38:13.245556 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.245560 | orchestrator | Wednesday 14 May 2025 02:32:44 +0000 (0:00:00.340) 0:07:51.275 ********* 2025-05-14 02:38:13.245564 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245568 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245572 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245576 | orchestrator | 2025-05-14 02:38:13.245580 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:38:13.245584 | orchestrator | Wednesday 14 May 2025 02:32:44 +0000 (0:00:00.360) 0:07:51.635 ********* 2025-05-14 02:38:13.245604 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:38:13.245609 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245613 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:38:13.245617 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245621 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:38:13.245625 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245629 | orchestrator | 2025-05-14 02:38:13.245633 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:38:13.245637 | orchestrator | Wednesday 14 May 2025 02:32:45 +0000 (0:00:01.122) 0:07:52.757 ********* 2025-05-14 02:38:13.245641 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.245645 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245649 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.245653 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245657 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.245661 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245665 | orchestrator | 2025-05-14 02:38:13.245669 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:38:13.245674 | orchestrator | Wednesday 14 May 2025 02:32:45 +0000 (0:00:00.395) 0:07:53.153 ********* 2025-05-14 02:38:13.245678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.245682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.245686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.245693 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:38:13.245697 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:38:13.245701 | orchestrator | skipping: [testbed-node-4] => 
(item=testbed-node-5)  2025-05-14 02:38:13.245705 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245709 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245713 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:38:13.245717 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:38:13.245721 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:38:13.245725 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245729 | orchestrator | 2025-05-14 02:38:13.245733 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:38:13.245740 | orchestrator | Wednesday 14 May 2025 02:32:46 +0000 (0:00:00.672) 0:07:53.825 ********* 2025-05-14 02:38:13.245744 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245749 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245756 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245763 | orchestrator | 2025-05-14 02:38:13.245770 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:38:13.245776 | orchestrator | Wednesday 14 May 2025 02:32:47 +0000 (0:00:00.831) 0:07:54.657 ********* 2025-05-14 02:38:13.245783 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:38:13.245790 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245796 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:38:13.245803 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245810 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:38:13.245821 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245828 | orchestrator | 2025-05-14 02:38:13.245834 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:38:13.245838 | orchestrator | Wednesday 14 May 2025 02:32:47 +0000 (0:00:00.599) 0:07:55.256 ********* 2025-05-14 02:38:13.245842 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245846 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245850 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245854 | orchestrator | 2025-05-14 02:38:13.245859 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:38:13.245863 | orchestrator | Wednesday 14 May 2025 02:32:48 +0000 (0:00:00.848) 0:07:56.104 ********* 2025-05-14 02:38:13.245867 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245871 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245874 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245878 | orchestrator | 2025-05-14 02:38:13.245882 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-05-14 02:38:13.245886 | orchestrator | Wednesday 14 May 2025 02:32:49 +0000 (0:00:00.551) 0:07:56.656 ********* 2025-05-14 02:38:13.245890 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.245894 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.245898 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.245902 | orchestrator | 2025-05-14 02:38:13.245906 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-05-14 02:38:13.245910 | orchestrator | Wednesday 14 May 2025 02:32:50 +0000 (0:00:00.646) 0:07:57.302 ********* 
2025-05-14 02:38:13.245914 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 02:38:13.245918 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:38:13.245922 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:38:13.245926 | orchestrator | 2025-05-14 02:38:13.245930 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-05-14 02:38:13.245934 | orchestrator | Wednesday 14 May 2025 02:32:50 +0000 (0:00:00.762) 0:07:58.065 ********* 2025-05-14 02:38:13.245938 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.245942 | orchestrator | 2025-05-14 02:38:13.245946 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-05-14 02:38:13.245950 | orchestrator | Wednesday 14 May 2025 02:32:51 +0000 (0:00:00.555) 0:07:58.620 ********* 2025-05-14 02:38:13.245954 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245958 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245963 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245967 | orchestrator | 2025-05-14 02:38:13.245971 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-05-14 02:38:13.245975 | orchestrator | Wednesday 14 May 2025 02:32:51 +0000 (0:00:00.298) 0:07:58.919 ********* 2025-05-14 02:38:13.245983 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.245987 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.245991 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.245995 | orchestrator | 2025-05-14 02:38:13.245999 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-05-14 02:38:13.246003 | orchestrator | Wednesday 14 May 2025 02:32:52 +0000 (0:00:00.599) 0:07:59.518 ********* 2025-05-14 02:38:13.246007 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.246011 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.246205 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.246211 | orchestrator | 2025-05-14 02:38:13.246215 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-05-14 02:38:13.246219 | orchestrator | Wednesday 14 May 2025 02:32:52 +0000 (0:00:00.348) 0:07:59.868 ********* 2025-05-14 02:38:13.246223 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.246227 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.246231 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.246236 | orchestrator | 2025-05-14 02:38:13.246240 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-05-14 02:38:13.246244 | orchestrator | Wednesday 14 May 2025 02:32:52 +0000 (0:00:00.306) 0:08:00.174 ********* 2025-05-14 02:38:13.246248 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.246252 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.246256 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.246260 | orchestrator | 2025-05-14 02:38:13.246314 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-05-14 02:38:13.246321 | orchestrator | Wednesday 14 May 2025 02:32:53 +0000 (0:00:00.695) 
0:08:00.870 ********* 2025-05-14 02:38:13.246325 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.246329 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.246333 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.246337 | orchestrator | 2025-05-14 02:38:13.246341 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-05-14 02:38:13.246345 | orchestrator | Wednesday 14 May 2025 02:32:54 +0000 (0:00:00.651) 0:08:01.521 ********* 2025-05-14 02:38:13.246349 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-14 02:38:13.246353 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-14 02:38:13.246357 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-14 02:38:13.246361 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-14 02:38:13.246365 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-14 02:38:13.246369 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-14 02:38:13.246373 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-14 02:38:13.246377 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-14 02:38:13.246381 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-14 02:38:13.246385 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-14 02:38:13.246389 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-14 02:38:13.246393 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-14 02:38:13.246398 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-14 02:38:13.246402 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-14 02:38:13.246406 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-14 02:38:13.246415 | orchestrator | 2025-05-14 02:38:13.246420 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-05-14 02:38:13.246424 | orchestrator | Wednesday 14 May 2025 02:32:56 +0000 (0:00:02.151) 0:08:03.672 ********* 2025-05-14 02:38:13.246428 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.246432 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.246436 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.246440 | orchestrator | 2025-05-14 02:38:13.246444 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-05-14 02:38:13.246448 | orchestrator | Wednesday 14 May 2025 02:32:56 +0000 (0:00:00.261) 0:08:03.934 ********* 2025-05-14 02:38:13.246452 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.246456 | orchestrator | 2025-05-14 02:38:13.246460 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-05-14 02:38:13.246464 | 
orchestrator | Wednesday 14 May 2025 02:32:57 +0000 (0:00:00.681) 0:08:04.616 ********* 2025-05-14 02:38:13.246468 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-14 02:38:13.246472 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-14 02:38:13.246476 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-14 02:38:13.246480 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-05-14 02:38:13.246484 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-05-14 02:38:13.246488 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-05-14 02:38:13.246492 | orchestrator | 2025-05-14 02:38:13.246496 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-05-14 02:38:13.246500 | orchestrator | Wednesday 14 May 2025 02:32:58 +0000 (0:00:00.943) 0:08:05.559 ********* 2025-05-14 02:38:13.246504 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:38:13.246508 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:38:13.246512 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-14 02:38:13.246516 | orchestrator | 2025-05-14 02:38:13.246520 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-05-14 02:38:13.246524 | orchestrator | Wednesday 14 May 2025 02:33:00 +0000 (0:00:01.783) 0:08:07.342 ********* 2025-05-14 02:38:13.246528 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 02:38:13.246532 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:38:13.246536 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.246541 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 02:38:13.246545 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:38:13.246549 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.246553 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 02:38:13.246557 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:38:13.246561 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.246565 | orchestrator | 2025-05-14 02:38:13.246569 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-05-14 02:38:13.246573 | orchestrator | Wednesday 14 May 2025 02:33:01 +0000 (0:00:01.146) 0:08:08.489 ********* 2025-05-14 02:38:13.246624 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:38:13.246630 | orchestrator | 2025-05-14 02:38:13.246634 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-05-14 02:38:13.246638 | orchestrator | Wednesday 14 May 2025 02:33:03 +0000 (0:00:02.562) 0:08:11.051 ********* 2025-05-14 02:38:13.246642 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.246646 | orchestrator | 2025-05-14 02:38:13.246650 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-05-14 02:38:13.246658 | orchestrator | Wednesday 14 May 2025 02:33:04 +0000 (0:00:00.618) 0:08:11.670 ********* 2025-05-14 02:38:13.246662 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.246666 | orchestrator | 
skipping: [testbed-node-4] 2025-05-14 02:38:13.246670 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.246674 | orchestrator | 2025-05-14 02:38:13.246734 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-05-14 02:38:13.246758 | orchestrator | Wednesday 14 May 2025 02:33:04 +0000 (0:00:00.531) 0:08:12.201 ********* 2025-05-14 02:38:13.246765 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.246771 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.246777 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.246783 | orchestrator | 2025-05-14 02:38:13.246789 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-05-14 02:38:13.246795 | orchestrator | Wednesday 14 May 2025 02:33:05 +0000 (0:00:00.376) 0:08:12.578 ********* 2025-05-14 02:38:13.246802 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.246812 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.246817 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.246823 | orchestrator | 2025-05-14 02:38:13.246829 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-05-14 02:38:13.246834 | orchestrator | Wednesday 14 May 2025 02:33:05 +0000 (0:00:00.373) 0:08:12.952 ********* 2025-05-14 02:38:13.246840 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.246846 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.246852 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.246858 | orchestrator | 2025-05-14 02:38:13.246865 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-05-14 02:38:13.246871 | orchestrator | Wednesday 14 May 2025 02:33:06 +0000 (0:00:00.311) 0:08:13.264 ********* 2025-05-14 02:38:13.246877 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.246883 | orchestrator | 2025-05-14 02:38:13.246890 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-05-14 02:38:13.246895 | orchestrator | Wednesday 14 May 2025 02:33:06 +0000 (0:00:00.849) 0:08:14.113 ********* 2025-05-14 02:38:13.246902 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cb58592c-122c-52e3-870d-c9748cfaa53d', 'data_vg': 'ceph-cb58592c-122c-52e3-870d-c9748cfaa53d'}) 2025-05-14 02:38:13.246908 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4aa0a295-50da-5a6e-9e1c-976797741e16', 'data_vg': 'ceph-4aa0a295-50da-5a6e-9e1c-976797741e16'}) 2025-05-14 02:38:13.246913 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-22852bcc-228b-503b-9f2d-d63325c20b67', 'data_vg': 'ceph-22852bcc-228b-503b-9f2d-d63325c20b67'}) 2025-05-14 02:38:13.246917 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b14ae20f-13fb-53c3-906d-34f9f68040ad', 'data_vg': 'ceph-b14ae20f-13fb-53c3-906d-34f9f68040ad'}) 2025-05-14 02:38:13.246920 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-19540cc4-3279-5090-817a-02eeffb19a16', 'data_vg': 'ceph-19540cc4-3279-5090-817a-02eeffb19a16'}) 2025-05-14 02:38:13.246924 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-fc7bdc9b-bbf6-5512-af7e-0ab125570579', 'data_vg': 'ceph-fc7bdc9b-bbf6-5512-af7e-0ab125570579'}) 
2025-05-14 02:38:13.246928 | orchestrator | 2025-05-14 02:38:13.246932 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-05-14 02:38:13.246935 | orchestrator | Wednesday 14 May 2025 02:33:47 +0000 (0:00:40.288) 0:08:54.402 ********* 2025-05-14 02:38:13.246939 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.246943 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.246948 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.246952 | orchestrator | 2025-05-14 02:38:13.246957 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] ********************************* 2025-05-14 02:38:13.246967 | orchestrator | Wednesday 14 May 2025 02:33:47 +0000 (0:00:00.461) 0:08:54.863 ********* 2025-05-14 02:38:13.246972 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.246976 | orchestrator | 2025-05-14 02:38:13.246980 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-05-14 02:38:13.246985 | orchestrator | Wednesday 14 May 2025 02:33:48 +0000 (0:00:00.550) 0:08:55.414 ********* 2025-05-14 02:38:13.246989 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.246994 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.246998 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.247002 | orchestrator | 2025-05-14 02:38:13.247007 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-05-14 02:38:13.247011 | orchestrator | Wednesday 14 May 2025 02:33:48 +0000 (0:00:00.641) 0:08:56.055 ********* 2025-05-14 02:38:13.247015 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.247020 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.247024 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.247028 | orchestrator | 2025-05-14 02:38:13.247055 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-05-14 02:38:13.247060 | orchestrator | Wednesday 14 May 2025 02:33:50 +0000 (0:00:01.945) 0:08:58.001 ********* 2025-05-14 02:38:13.247064 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.247068 | orchestrator | 2025-05-14 02:38:13.247073 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-05-14 02:38:13.247077 | orchestrator | Wednesday 14 May 2025 02:33:51 +0000 (0:00:00.570) 0:08:58.571 ********* 2025-05-14 02:38:13.247081 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.247085 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.247090 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.247094 | orchestrator | 2025-05-14 02:38:13.247098 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-05-14 02:38:13.247102 | orchestrator | Wednesday 14 May 2025 02:33:52 +0000 (0:00:01.563) 0:09:00.135 ********* 2025-05-14 02:38:13.247107 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.247111 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.247115 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.247119 | orchestrator | 2025-05-14 02:38:13.247124 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-05-14 02:38:13.247128 | orchestrator | 
Wednesday 14 May 2025 02:33:54 +0000 (0:00:01.290) 0:09:01.426 ********* 2025-05-14 02:38:13.247132 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.247136 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.247140 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.247145 | orchestrator | 2025-05-14 02:38:13.247149 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-05-14 02:38:13.247156 | orchestrator | Wednesday 14 May 2025 02:33:55 +0000 (0:00:01.660) 0:09:03.087 ********* 2025-05-14 02:38:13.247160 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247165 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247169 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.247173 | orchestrator | 2025-05-14 02:38:13.247178 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-05-14 02:38:13.247182 | orchestrator | Wednesday 14 May 2025 02:33:56 +0000 (0:00:00.365) 0:09:03.452 ********* 2025-05-14 02:38:13.247186 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247190 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247195 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.247199 | orchestrator | 2025-05-14 02:38:13.247203 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-05-14 02:38:13.247207 | orchestrator | Wednesday 14 May 2025 02:33:56 +0000 (0:00:00.622) 0:09:04.075 ********* 2025-05-14 02:38:13.247212 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-05-14 02:38:13.247220 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-14 02:38:13.247224 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-05-14 02:38:13.247228 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-05-14 02:38:13.247232 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-05-14 02:38:13.247237 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-05-14 02:38:13.247241 | orchestrator | 2025-05-14 02:38:13.247246 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-05-14 02:38:13.247250 | orchestrator | Wednesday 14 May 2025 02:33:57 +0000 (0:00:01.003) 0:09:05.078 ********* 2025-05-14 02:38:13.247254 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-05-14 02:38:13.247259 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-05-14 02:38:13.247263 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-05-14 02:38:13.247267 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-05-14 02:38:13.247271 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-05-14 02:38:13.247276 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-05-14 02:38:13.247280 | orchestrator | 2025-05-14 02:38:13.247284 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-05-14 02:38:13.247288 | orchestrator | Wednesday 14 May 2025 02:34:01 +0000 (0:00:03.530) 0:09:08.608 ********* 2025-05-14 02:38:13.247293 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247297 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247302 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:38:13.247306 | orchestrator | 2025-05-14 02:38:13.247309 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-05-14 02:38:13.247313 | orchestrator | 
Wednesday 14 May 2025 02:34:03 +0000 (0:00:02.305) 0:09:10.915 ********* 2025-05-14 02:38:13.247317 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247320 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247324 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 2025-05-14 02:38:13.247328 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:38:13.247332 | orchestrator | 2025-05-14 02:38:13.247335 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-05-14 02:38:13.247339 | orchestrator | Wednesday 14 May 2025 02:34:16 +0000 (0:00:12.606) 0:09:23.521 ********* 2025-05-14 02:38:13.247343 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247346 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247350 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.247354 | orchestrator | 2025-05-14 02:38:13.247358 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-05-14 02:38:13.247361 | orchestrator | Wednesday 14 May 2025 02:34:16 +0000 (0:00:00.442) 0:09:23.964 ********* 2025-05-14 02:38:13.247365 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247369 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247372 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.247376 | orchestrator | 2025-05-14 02:38:13.247380 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 02:38:13.247383 | orchestrator | Wednesday 14 May 2025 02:34:17 +0000 (0:00:01.179) 0:09:25.144 ********* 2025-05-14 02:38:13.247387 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.247391 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.247394 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.247398 | orchestrator | 2025-05-14 02:38:13.247402 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-14 02:38:13.247417 | orchestrator | Wednesday 14 May 2025 02:34:18 +0000 (0:00:00.716) 0:09:25.861 ********* 2025-05-14 02:38:13.247422 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.247425 | orchestrator | 2025-05-14 02:38:13.247429 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-05-14 02:38:13.247437 | orchestrator | Wednesday 14 May 2025 02:34:19 +0000 (0:00:00.785) 0:09:26.646 ********* 2025-05-14 02:38:13.247440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.247444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.247448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.247451 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247455 | orchestrator | 2025-05-14 02:38:13.247459 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-05-14 02:38:13.247463 | orchestrator | Wednesday 14 May 2025 02:34:19 +0000 (0:00:00.425) 0:09:27.071 ********* 2025-05-14 02:38:13.247466 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247470 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247474 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
02:38:13.247477 | orchestrator | 2025-05-14 02:38:13.247481 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-05-14 02:38:13.247485 | orchestrator | Wednesday 14 May 2025 02:34:20 +0000 (0:00:00.302) 0:09:27.374 ********* 2025-05-14 02:38:13.247489 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247492 | orchestrator | 2025-05-14 02:38:13.247496 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-05-14 02:38:13.247502 | orchestrator | Wednesday 14 May 2025 02:34:20 +0000 (0:00:00.244) 0:09:27.619 ********* 2025-05-14 02:38:13.247506 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247510 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247513 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.247517 | orchestrator | 2025-05-14 02:38:13.247521 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-05-14 02:38:13.247524 | orchestrator | Wednesday 14 May 2025 02:34:20 +0000 (0:00:00.600) 0:09:28.220 ********* 2025-05-14 02:38:13.247528 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247532 | orchestrator | 2025-05-14 02:38:13.247536 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-05-14 02:38:13.247539 | orchestrator | Wednesday 14 May 2025 02:34:21 +0000 (0:00:00.239) 0:09:28.460 ********* 2025-05-14 02:38:13.247543 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247547 | orchestrator | 2025-05-14 02:38:13.247550 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-14 02:38:13.247554 | orchestrator | Wednesday 14 May 2025 02:34:21 +0000 (0:00:00.299) 0:09:28.760 ********* 2025-05-14 02:38:13.247558 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247561 | orchestrator | 2025-05-14 02:38:13.247565 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-05-14 02:38:13.247569 | orchestrator | Wednesday 14 May 2025 02:34:21 +0000 (0:00:00.121) 0:09:28.881 ********* 2025-05-14 02:38:13.247572 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247576 | orchestrator | 2025-05-14 02:38:13.247580 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-05-14 02:38:13.247583 | orchestrator | Wednesday 14 May 2025 02:34:21 +0000 (0:00:00.247) 0:09:29.129 ********* 2025-05-14 02:38:13.247604 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247608 | orchestrator | 2025-05-14 02:38:13.247612 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-05-14 02:38:13.247615 | orchestrator | Wednesday 14 May 2025 02:34:22 +0000 (0:00:00.243) 0:09:29.372 ********* 2025-05-14 02:38:13.247619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.247623 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.247626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.247630 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247634 | orchestrator | 2025-05-14 02:38:13.247637 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-05-14 02:38:13.247641 | orchestrator | Wednesday 14 May 2025 02:34:22 +0000 (0:00:00.457) 0:09:29.829 
********* 2025-05-14 02:38:13.247648 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247652 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247655 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.247659 | orchestrator | 2025-05-14 02:38:13.247663 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-05-14 02:38:13.247667 | orchestrator | Wednesday 14 May 2025 02:34:22 +0000 (0:00:00.395) 0:09:30.225 ********* 2025-05-14 02:38:13.247670 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247674 | orchestrator | 2025-05-14 02:38:13.247678 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-05-14 02:38:13.247681 | orchestrator | Wednesday 14 May 2025 02:34:23 +0000 (0:00:00.859) 0:09:31.084 ********* 2025-05-14 02:38:13.247685 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247689 | orchestrator | 2025-05-14 02:38:13.247693 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:38:13.247696 | orchestrator | Wednesday 14 May 2025 02:34:24 +0000 (0:00:00.233) 0:09:31.318 ********* 2025-05-14 02:38:13.247700 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.247704 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.247707 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.247711 | orchestrator | 2025-05-14 02:38:13.247715 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-14 02:38:13.247718 | orchestrator | 2025-05-14 02:38:13.247722 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:38:13.247726 | orchestrator | Wednesday 14 May 2025 02:34:27 +0000 (0:00:03.045) 0:09:34.363 ********* 2025-05-14 02:38:13.247741 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.247746 | orchestrator | 2025-05-14 02:38:13.247749 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 02:38:13.247753 | orchestrator | Wednesday 14 May 2025 02:34:28 +0000 (0:00:01.540) 0:09:35.904 ********* 2025-05-14 02:38:13.247757 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247761 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.247764 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247768 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.247772 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.247776 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.247779 | orchestrator | 2025-05-14 02:38:13.247783 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:38:13.247787 | orchestrator | Wednesday 14 May 2025 02:34:29 +0000 (0:00:00.793) 0:09:36.698 ********* 2025-05-14 02:38:13.247790 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.247794 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.247798 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.247802 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.247805 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.247809 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.247813 | orchestrator | 2025-05-14 
02:38:13.247817 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:38:13.247820 | orchestrator | Wednesday 14 May 2025 02:34:30 +0000 (0:00:01.256) 0:09:37.954 ********* 2025-05-14 02:38:13.247824 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.247828 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.247831 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.247835 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.247839 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.247842 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.247849 | orchestrator | 2025-05-14 02:38:13.247856 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:38:13.247859 | orchestrator | Wednesday 14 May 2025 02:34:31 +0000 (0:00:00.993) 0:09:38.947 ********* 2025-05-14 02:38:13.247863 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.247870 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.247874 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.247877 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.247881 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.247885 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.247888 | orchestrator | 2025-05-14 02:38:13.247892 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:38:13.247896 | orchestrator | Wednesday 14 May 2025 02:34:32 +0000 (0:00:01.252) 0:09:40.200 ********* 2025-05-14 02:38:13.247899 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247903 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247907 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.247910 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.247914 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.247918 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.247921 | orchestrator | 2025-05-14 02:38:13.247925 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:38:13.247929 | orchestrator | Wednesday 14 May 2025 02:34:33 +0000 (0:00:00.970) 0:09:41.170 ********* 2025-05-14 02:38:13.247933 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.247936 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.247940 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.247944 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247947 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247951 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.247955 | orchestrator | 2025-05-14 02:38:13.247958 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:38:13.247962 | orchestrator | Wednesday 14 May 2025 02:34:34 +0000 (0:00:00.618) 0:09:41.789 ********* 2025-05-14 02:38:13.247966 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.247969 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.247973 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.247977 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.247980 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.247984 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.247988 | orchestrator | 2025-05-14 02:38:13.247991 | orchestrator | TASK 
[ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:38:13.247995 | orchestrator | Wednesday 14 May 2025 02:34:35 +0000 (0:00:00.634) 0:09:42.424 ********* 2025-05-14 02:38:13.247999 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248003 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248006 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248010 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248013 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248017 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248021 | orchestrator | 2025-05-14 02:38:13.248025 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:38:13.248028 | orchestrator | Wednesday 14 May 2025 02:34:35 +0000 (0:00:00.766) 0:09:43.190 ********* 2025-05-14 02:38:13.248032 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248036 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248039 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248043 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248047 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248050 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248054 | orchestrator | 2025-05-14 02:38:13.248058 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:38:13.248062 | orchestrator | Wednesday 14 May 2025 02:34:36 +0000 (0:00:00.556) 0:09:43.746 ********* 2025-05-14 02:38:13.248065 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248069 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248073 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248076 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248083 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248087 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248090 | orchestrator | 2025-05-14 02:38:13.248094 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:38:13.248098 | orchestrator | Wednesday 14 May 2025 02:34:37 +0000 (0:00:00.775) 0:09:44.522 ********* 2025-05-14 02:38:13.248101 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.248105 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.248121 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.248125 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.248129 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.248133 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.248136 | orchestrator | 2025-05-14 02:38:13.248140 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:38:13.248144 | orchestrator | Wednesday 14 May 2025 02:34:38 +0000 (0:00:01.091) 0:09:45.613 ********* 2025-05-14 02:38:13.248148 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248151 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248155 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248159 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248163 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248166 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248170 | orchestrator | 2025-05-14 02:38:13.248174 | orchestrator | TASK [ceph-handler : set_fact 
handler_mon_status] ****************************** 2025-05-14 02:38:13.248177 | orchestrator | Wednesday 14 May 2025 02:34:38 +0000 (0:00:00.579) 0:09:46.192 ********* 2025-05-14 02:38:13.248181 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.248185 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.248188 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.248192 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248196 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248199 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248203 | orchestrator | 2025-05-14 02:38:13.248207 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:38:13.248210 | orchestrator | Wednesday 14 May 2025 02:34:39 +0000 (0:00:00.856) 0:09:47.049 ********* 2025-05-14 02:38:13.248214 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248218 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248222 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248228 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.248231 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.248235 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.248239 | orchestrator | 2025-05-14 02:38:13.248242 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:38:13.248246 | orchestrator | Wednesday 14 May 2025 02:34:40 +0000 (0:00:00.659) 0:09:47.709 ********* 2025-05-14 02:38:13.248250 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248253 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248257 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248261 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.248264 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.248268 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.248272 | orchestrator | 2025-05-14 02:38:13.248276 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:38:13.248279 | orchestrator | Wednesday 14 May 2025 02:34:41 +0000 (0:00:00.919) 0:09:48.628 ********* 2025-05-14 02:38:13.248283 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248287 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248290 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248294 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.248298 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.248301 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.248305 | orchestrator | 2025-05-14 02:38:13.248309 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:38:13.248313 | orchestrator | Wednesday 14 May 2025 02:34:42 +0000 (0:00:00.648) 0:09:49.277 ********* 2025-05-14 02:38:13.248319 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248323 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248327 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248331 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248334 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248338 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248341 | orchestrator | 2025-05-14 02:38:13.248345 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 
02:38:13.248360 | orchestrator | Wednesday 14 May 2025 02:34:42 +0000 (0:00:00.601) 0:09:49.879 ********* 2025-05-14 02:38:13.248364 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248368 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248371 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248375 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248378 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248382 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248386 | orchestrator | 2025-05-14 02:38:13.248389 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:38:13.248393 | orchestrator | Wednesday 14 May 2025 02:34:43 +0000 (0:00:00.883) 0:09:50.762 ********* 2025-05-14 02:38:13.248397 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.248400 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.248404 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.248408 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248411 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248415 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248419 | orchestrator | 2025-05-14 02:38:13.248422 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:38:13.248426 | orchestrator | Wednesday 14 May 2025 02:34:44 +0000 (0:00:00.616) 0:09:51.378 ********* 2025-05-14 02:38:13.248430 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.248433 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.248437 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.248441 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.248444 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.248448 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.248452 | orchestrator | 2025-05-14 02:38:13.248455 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:38:13.248459 | orchestrator | Wednesday 14 May 2025 02:34:45 +0000 (0:00:00.975) 0:09:52.354 ********* 2025-05-14 02:38:13.248463 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248466 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248470 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248480 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248484 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248488 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248492 | orchestrator | 2025-05-14 02:38:13.248495 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:38:13.248499 | orchestrator | Wednesday 14 May 2025 02:34:45 +0000 (0:00:00.619) 0:09:52.974 ********* 2025-05-14 02:38:13.248515 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248519 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248523 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248526 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248530 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248534 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248537 | orchestrator | 2025-05-14 02:38:13.248541 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:38:13.248545 | orchestrator | Wednesday 14 
May 2025 02:34:46 +0000 (0:00:00.943) 0:09:53.917 ********* 2025-05-14 02:38:13.248548 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248552 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248558 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248562 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248566 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248569 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248573 | orchestrator | 2025-05-14 02:38:13.248577 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:38:13.248580 | orchestrator | Wednesday 14 May 2025 02:34:47 +0000 (0:00:00.604) 0:09:54.522 ********* 2025-05-14 02:38:13.248584 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248598 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248601 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248605 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248609 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248612 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248616 | orchestrator | 2025-05-14 02:38:13.248620 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:38:13.248624 | orchestrator | Wednesday 14 May 2025 02:34:48 +0000 (0:00:00.938) 0:09:55.460 ********* 2025-05-14 02:38:13.248632 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248636 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248639 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248643 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248646 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248650 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248654 | orchestrator | 2025-05-14 02:38:13.248657 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:38:13.248661 | orchestrator | Wednesday 14 May 2025 02:34:48 +0000 (0:00:00.624) 0:09:56.085 ********* 2025-05-14 02:38:13.248665 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248668 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248672 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248676 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248679 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248683 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248686 | orchestrator | 2025-05-14 02:38:13.248690 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:38:13.248694 | orchestrator | Wednesday 14 May 2025 02:34:49 +0000 (0:00:00.922) 0:09:57.008 ********* 2025-05-14 02:38:13.248698 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248702 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248705 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248709 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248713 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248716 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248720 | orchestrator | 2025-05-14 02:38:13.248724 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:38:13.248727 | 
orchestrator | Wednesday 14 May 2025 02:34:50 +0000 (0:00:00.649) 0:09:57.657 ********* 2025-05-14 02:38:13.248731 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248735 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248738 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248742 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248746 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248749 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248753 | orchestrator | 2025-05-14 02:38:13.248757 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:38:13.248760 | orchestrator | Wednesday 14 May 2025 02:34:51 +0000 (0:00:01.035) 0:09:58.693 ********* 2025-05-14 02:38:13.248764 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248768 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248771 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248775 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248782 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248785 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248789 | orchestrator | 2025-05-14 02:38:13.248793 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:38:13.248796 | orchestrator | Wednesday 14 May 2025 02:34:52 +0000 (0:00:00.643) 0:09:59.336 ********* 2025-05-14 02:38:13.248800 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248804 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248807 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248811 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248814 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248818 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248822 | orchestrator | 2025-05-14 02:38:13.248825 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:38:13.248829 | orchestrator | Wednesday 14 May 2025 02:34:53 +0000 (0:00:01.062) 0:10:00.398 ********* 2025-05-14 02:38:13.248833 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248836 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248840 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248844 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248847 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248851 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248854 | orchestrator | 2025-05-14 02:38:13.248858 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:38:13.248862 | orchestrator | Wednesday 14 May 2025 02:34:53 +0000 (0:00:00.699) 0:10:01.098 ********* 2025-05-14 02:38:13.248866 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248869 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248873 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248888 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248892 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248896 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248900 | orchestrator | 2025-05-14 02:38:13.248903 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:38:13.248907 | orchestrator | Wednesday 14 May 2025 02:34:54 +0000 (0:00:01.151) 0:10:02.250 ********* 2025-05-14 02:38:13.248911 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:38:13.248914 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-14 02:38:13.248918 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.248922 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:38:13.248925 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-14 02:38:13.248929 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.248933 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:38:13.248936 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-14 02:38:13.248940 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.248943 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:38:13.248947 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:38:13.248950 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:38:13.248954 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:38:13.248958 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.248961 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.248965 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:38:13.248969 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:38:13.248972 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.248976 | orchestrator | 2025-05-14 02:38:13.248982 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:38:13.248986 | orchestrator | Wednesday 14 May 2025 02:34:55 +0000 (0:00:00.693) 0:10:02.944 ********* 2025-05-14 02:38:13.248989 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-14 02:38:13.248993 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-14 02:38:13.249000 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249004 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-14 02:38:13.249007 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-14 02:38:13.249011 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249015 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-14 02:38:13.249018 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-14 02:38:13.249022 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249025 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 02:38:13.249029 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 02:38:13.249033 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249037 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 02:38:13.249040 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 02:38:13.249044 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249048 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 02:38:13.249051 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 02:38:13.249055 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249059 | orchestrator | 
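(Editor's note: the two loop items skipped above, "osd memory target" and "osd_memory_target", are the alternative spellings under which ceph-ansible looks for an operator-supplied memory target in ceph_conf_overrides. A minimal sketch of such an override, assuming the usual group_vars layout; the 4 GiB value is purely illustrative:

    ceph_conf_overrides:
      osd:
        osd_memory_target: 4294967296   # bytes; illustrative value only

No override is set in this run, so both tasks above and the plain "set_fact _osd_memory_target" task that follows are skipped.)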
2025-05-14 02:38:13.249062 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:38:13.249066 | orchestrator | Wednesday 14 May 2025 02:34:56 +0000 (0:00:00.905) 0:10:03.849 ********* 2025-05-14 02:38:13.249070 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249073 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249077 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249081 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249084 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249088 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249091 | orchestrator | 2025-05-14 02:38:13.249095 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:38:13.249099 | orchestrator | Wednesday 14 May 2025 02:34:57 +0000 (0:00:00.615) 0:10:04.465 ********* 2025-05-14 02:38:13.249103 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249106 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249110 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249113 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249117 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249121 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249124 | orchestrator | 2025-05-14 02:38:13.249128 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:38:13.249132 | orchestrator | Wednesday 14 May 2025 02:34:57 +0000 (0:00:00.748) 0:10:05.213 ********* 2025-05-14 02:38:13.249136 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249139 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249143 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249147 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249150 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249154 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249158 | orchestrator | 2025-05-14 02:38:13.249161 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:38:13.249165 | orchestrator | Wednesday 14 May 2025 02:34:58 +0000 (0:00:00.600) 0:10:05.814 ********* 2025-05-14 02:38:13.249169 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249172 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249176 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249180 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249183 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249187 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249193 | orchestrator | 2025-05-14 02:38:13.249197 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:38:13.249201 | orchestrator | Wednesday 14 May 2025 02:34:59 +0000 (0:00:00.793) 0:10:06.607 ********* 2025-05-14 02:38:13.249216 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249220 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249224 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249227 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249231 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249234 | orchestrator | skipping: 
[testbed-node-5] 2025-05-14 02:38:13.249238 | orchestrator | 2025-05-14 02:38:13.249242 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:38:13.249245 | orchestrator | Wednesday 14 May 2025 02:34:59 +0000 (0:00:00.641) 0:10:07.249 ********* 2025-05-14 02:38:13.249249 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249253 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249256 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249260 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249264 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249267 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249271 | orchestrator | 2025-05-14 02:38:13.249274 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:38:13.249278 | orchestrator | Wednesday 14 May 2025 02:35:00 +0000 (0:00:00.998) 0:10:08.248 ********* 2025-05-14 02:38:13.249282 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.249285 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.249289 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.249293 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249296 | orchestrator | 2025-05-14 02:38:13.249300 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:38:13.249304 | orchestrator | Wednesday 14 May 2025 02:35:01 +0000 (0:00:00.370) 0:10:08.618 ********* 2025-05-14 02:38:13.249312 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.249318 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.249326 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.249333 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249343 | orchestrator | 2025-05-14 02:38:13.249351 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:38:13.249357 | orchestrator | Wednesday 14 May 2025 02:35:01 +0000 (0:00:00.430) 0:10:09.048 ********* 2025-05-14 02:38:13.249363 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.249369 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.249374 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.249380 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249386 | orchestrator | 2025-05-14 02:38:13.249392 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.249398 | orchestrator | Wednesday 14 May 2025 02:35:02 +0000 (0:00:00.426) 0:10:09.475 ********* 2025-05-14 02:38:13.249404 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249411 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249415 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249419 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249422 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249426 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249430 | orchestrator | 2025-05-14 02:38:13.249433 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 
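(Editor's note: the rgw_instances facts assembled by the surrounding ceph-facts tasks are per-host lists of RGW instance definitions. Judging from the skipped loop items logged for the osd/rgw nodes a little further down, each entry has roughly this shape; a YAML sketch with the address taken from testbed-node-3's item:

    rgw_instances:
      - instance_name: rgw0
        radosgw_address: 192.168.16.13
        radosgw_frontend_port: 8081

All of these tasks are skipped in this play; the sketch only illustrates the structure behind the skipped items.)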
2025-05-14 02:38:13.249437 | orchestrator | Wednesday 14 May 2025 02:35:03 +0000 (0:00:00.888) 0:10:10.363 ********* 2025-05-14 02:38:13.249441 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:38:13.249444 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249452 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:38:13.249456 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249460 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:38:13.249463 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249467 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:38:13.249471 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249474 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:38:13.249478 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249482 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:38:13.249485 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249489 | orchestrator | 2025-05-14 02:38:13.249492 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:38:13.249496 | orchestrator | Wednesday 14 May 2025 02:35:03 +0000 (0:00:00.897) 0:10:11.261 ********* 2025-05-14 02:38:13.249500 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249503 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249507 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249511 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249514 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249518 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249522 | orchestrator | 2025-05-14 02:38:13.249525 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.249529 | orchestrator | Wednesday 14 May 2025 02:35:04 +0000 (0:00:00.771) 0:10:12.032 ********* 2025-05-14 02:38:13.249533 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249536 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249540 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249544 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249547 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249551 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249555 | orchestrator | 2025-05-14 02:38:13.249558 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:38:13.249562 | orchestrator | Wednesday 14 May 2025 02:35:05 +0000 (0:00:00.669) 0:10:12.702 ********* 2025-05-14 02:38:13.249566 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-14 02:38:13.249570 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-14 02:38:13.249573 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-14 02:38:13.249577 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249580 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249584 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:38:13.249616 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249621 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249624 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:38:13.249628 | orchestrator | skipping: [testbed-node-4] 2025-05-14 
02:38:13.249632 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:38:13.249635 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249639 | orchestrator | 2025-05-14 02:38:13.249643 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:38:13.249646 | orchestrator | Wednesday 14 May 2025 02:35:06 +0000 (0:00:01.507) 0:10:14.210 ********* 2025-05-14 02:38:13.249650 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249654 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249657 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249661 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.249665 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249669 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.249672 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249679 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.249683 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249689 | orchestrator | 2025-05-14 02:38:13.249696 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:38:13.249709 | orchestrator | Wednesday 14 May 2025 02:35:07 +0000 (0:00:00.680) 0:10:14.891 ********* 2025-05-14 02:38:13.249715 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-14 02:38:13.249721 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-14 02:38:13.249727 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-14 02:38:13.249734 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249740 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-14 02:38:13.249745 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-14 02:38:13.249753 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-14 02:38:13.249757 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249760 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-14 02:38:13.249764 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-14 02:38:13.249768 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-14 02:38:13.249771 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.249779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.249782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.249786 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:38:13.249789 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:38:13.249793 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:38:13.249797 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249800 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249804 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-3)  2025-05-14 02:38:13.249807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:38:13.249811 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:38:13.249815 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249818 | orchestrator | 2025-05-14 02:38:13.249822 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:38:13.249825 | orchestrator | Wednesday 14 May 2025 02:35:09 +0000 (0:00:01.773) 0:10:16.664 ********* 2025-05-14 02:38:13.249829 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249833 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249836 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249840 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249844 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249847 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249851 | orchestrator | 2025-05-14 02:38:13.249854 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:38:13.249858 | orchestrator | Wednesday 14 May 2025 02:35:10 +0000 (0:00:01.500) 0:10:18.164 ********* 2025-05-14 02:38:13.249862 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249865 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249869 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249873 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:38:13.249876 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249880 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:38:13.249883 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249887 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:38:13.249895 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249898 | orchestrator | 2025-05-14 02:38:13.249902 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:38:13.249906 | orchestrator | Wednesday 14 May 2025 02:35:12 +0000 (0:00:01.499) 0:10:19.664 ********* 2025-05-14 02:38:13.249910 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249913 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249917 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249920 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249924 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249928 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249931 | orchestrator | 2025-05-14 02:38:13.249935 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:38:13.249952 | orchestrator | Wednesday 14 May 2025 02:35:14 +0000 (0:00:01.777) 0:10:21.442 ********* 2025-05-14 02:38:13.249957 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:13.249960 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:13.249964 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:13.249968 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.249971 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.249975 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.249979 | orchestrator | 2025-05-14 02:38:13.249983 | orchestrator | TASK [ceph-crash : create client.crash keyring] 
******************************** 2025-05-14 02:38:13.249986 | orchestrator | Wednesday 14 May 2025 02:35:15 +0000 (0:00:01.243) 0:10:22.686 ********* 2025-05-14 02:38:13.249990 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.249994 | orchestrator | 2025-05-14 02:38:13.249997 | orchestrator | TASK [ceph-crash : get keys from monitors] ************************************* 2025-05-14 02:38:13.250001 | orchestrator | Wednesday 14 May 2025 02:35:18 +0000 (0:00:03.473) 0:10:26.159 ********* 2025-05-14 02:38:13.250005 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.250008 | orchestrator | 2025-05-14 02:38:13.250012 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-05-14 02:38:13.250043 | orchestrator | Wednesday 14 May 2025 02:35:20 +0000 (0:00:01.899) 0:10:28.058 ********* 2025-05-14 02:38:13.250047 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.250051 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.250055 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.250058 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.250062 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.250065 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.250069 | orchestrator | 2025-05-14 02:38:13.250073 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-05-14 02:38:13.250080 | orchestrator | Wednesday 14 May 2025 02:35:22 +0000 (0:00:01.792) 0:10:29.851 ********* 2025-05-14 02:38:13.250084 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.250088 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.250091 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.250095 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.250099 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.250102 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.250106 | orchestrator | 2025-05-14 02:38:13.250110 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-05-14 02:38:13.250113 | orchestrator | Wednesday 14 May 2025 02:35:23 +0000 (0:00:01.285) 0:10:31.137 ********* 2025-05-14 02:38:13.250117 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.250122 | orchestrator | 2025-05-14 02:38:13.250125 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-05-14 02:38:13.250129 | orchestrator | Wednesday 14 May 2025 02:35:25 +0000 (0:00:01.420) 0:10:32.558 ********* 2025-05-14 02:38:13.250133 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.250136 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.250144 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.250147 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.250151 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.250155 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.250158 | orchestrator | 2025-05-14 02:38:13.250162 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-05-14 02:38:13.250166 | orchestrator | Wednesday 14 May 2025 02:35:27 +0000 (0:00:01.731) 0:10:34.290 ********* 2025-05-14 02:38:13.250170 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.250173 | 
orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.250177 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.250181 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.250184 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.250188 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.250191 | orchestrator | 2025-05-14 02:38:13.250195 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-05-14 02:38:13.250199 | orchestrator | Wednesday 14 May 2025 02:35:31 +0000 (0:00:04.319) 0:10:38.609 ********* 2025-05-14 02:38:13.250203 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.250206 | orchestrator | 2025-05-14 02:38:13.250210 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-05-14 02:38:13.250214 | orchestrator | Wednesday 14 May 2025 02:35:32 +0000 (0:00:01.331) 0:10:39.940 ********* 2025-05-14 02:38:13.250217 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.250221 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.250225 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.250229 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.250232 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.250236 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.250239 | orchestrator | 2025-05-14 02:38:13.250243 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-05-14 02:38:13.250247 | orchestrator | Wednesday 14 May 2025 02:35:33 +0000 (0:00:00.639) 0:10:40.580 ********* 2025-05-14 02:38:13.250251 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:13.250254 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.250258 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.250262 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:13.250265 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:13.250269 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.250273 | orchestrator | 2025-05-14 02:38:13.250276 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-05-14 02:38:13.250280 | orchestrator | Wednesday 14 May 2025 02:35:35 +0000 (0:00:02.394) 0:10:42.975 ********* 2025-05-14 02:38:13.250284 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:13.250288 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:13.250291 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:13.250295 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.250299 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.250303 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.250306 | orchestrator | 2025-05-14 02:38:13.250310 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-14 02:38:13.250314 | orchestrator | 2025-05-14 02:38:13.250317 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:38:13.250324 | orchestrator | Wednesday 14 May 2025 02:35:37 +0000 (0:00:02.263) 0:10:45.238 ********* 2025-05-14 02:38:13.250328 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.250332 | orchestrator | 2025-05-14 
02:38:13.250335 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 02:38:13.250339 | orchestrator | Wednesday 14 May 2025 02:35:38 +0000 (0:00:00.479) 0:10:45.717 ********* 2025-05-14 02:38:13.250343 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250350 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250354 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250357 | orchestrator | 2025-05-14 02:38:13.250361 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:38:13.250365 | orchestrator | Wednesday 14 May 2025 02:35:38 +0000 (0:00:00.485) 0:10:46.202 ********* 2025-05-14 02:38:13.250368 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.250372 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.250376 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.250379 | orchestrator | 2025-05-14 02:38:13.250383 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:38:13.250387 | orchestrator | Wednesday 14 May 2025 02:35:39 +0000 (0:00:00.744) 0:10:46.947 ********* 2025-05-14 02:38:13.250390 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.250394 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.250398 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.250402 | orchestrator | 2025-05-14 02:38:13.250405 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:38:13.250411 | orchestrator | Wednesday 14 May 2025 02:35:40 +0000 (0:00:00.700) 0:10:47.647 ********* 2025-05-14 02:38:13.250415 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.250419 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.250423 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.250426 | orchestrator | 2025-05-14 02:38:13.250430 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:38:13.250434 | orchestrator | Wednesday 14 May 2025 02:35:41 +0000 (0:00:00.689) 0:10:48.336 ********* 2025-05-14 02:38:13.250437 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250441 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250445 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250448 | orchestrator | 2025-05-14 02:38:13.250452 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:38:13.250456 | orchestrator | Wednesday 14 May 2025 02:35:41 +0000 (0:00:00.625) 0:10:48.961 ********* 2025-05-14 02:38:13.250459 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250463 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250467 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250471 | orchestrator | 2025-05-14 02:38:13.250474 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:38:13.250478 | orchestrator | Wednesday 14 May 2025 02:35:42 +0000 (0:00:00.335) 0:10:49.297 ********* 2025-05-14 02:38:13.250482 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250485 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250489 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250493 | orchestrator | 2025-05-14 02:38:13.250496 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 
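(Editor's note: each "check for a ... container" task from check_running_containers.yml probes whether the corresponding Ceph daemon container is running on the host, and the results feed the handler_*_status facts set a little further down. The actual tasks live in the included file named above; as a rough, hypothetical sketch of the pattern, assuming podman as the container runtime:

    - name: check for a mds container            # illustrative sketch, not copied from the role
      ansible.builtin.command: "podman ps -q --filter name=ceph-mds-{{ ansible_facts['hostname'] }}"
      register: ceph_mds_container_stat
      changed_when: false
      failed_when: false

Checks for daemon types that are not deployed on a host are skipped, which is why testbed-node-3/4/5 only return ok for the osd, mds, rgw and ceph-crash checks in this play.)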
2025-05-14 02:38:13.250500 | orchestrator | Wednesday 14 May 2025 02:35:42 +0000 (0:00:00.324) 0:10:49.621 ********* 2025-05-14 02:38:13.250504 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250507 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250511 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250514 | orchestrator | 2025-05-14 02:38:13.250518 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:38:13.250522 | orchestrator | Wednesday 14 May 2025 02:35:42 +0000 (0:00:00.316) 0:10:49.938 ********* 2025-05-14 02:38:13.250526 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250529 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250533 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250537 | orchestrator | 2025-05-14 02:38:13.250540 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:38:13.250544 | orchestrator | Wednesday 14 May 2025 02:35:43 +0000 (0:00:00.628) 0:10:50.566 ********* 2025-05-14 02:38:13.250548 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250551 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250558 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250562 | orchestrator | 2025-05-14 02:38:13.250565 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:38:13.250569 | orchestrator | Wednesday 14 May 2025 02:35:43 +0000 (0:00:00.323) 0:10:50.890 ********* 2025-05-14 02:38:13.250573 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.250576 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.250580 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.250584 | orchestrator | 2025-05-14 02:38:13.250618 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:38:13.250622 | orchestrator | Wednesday 14 May 2025 02:35:44 +0000 (0:00:00.744) 0:10:51.635 ********* 2025-05-14 02:38:13.250625 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250629 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250633 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250636 | orchestrator | 2025-05-14 02:38:13.250640 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 02:38:13.250644 | orchestrator | Wednesday 14 May 2025 02:35:44 +0000 (0:00:00.366) 0:10:52.001 ********* 2025-05-14 02:38:13.250647 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250651 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250655 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250658 | orchestrator | 2025-05-14 02:38:13.250662 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:38:13.250666 | orchestrator | Wednesday 14 May 2025 02:35:45 +0000 (0:00:00.628) 0:10:52.630 ********* 2025-05-14 02:38:13.250669 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.250673 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.250677 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.250680 | orchestrator | 2025-05-14 02:38:13.250684 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:38:13.250692 | orchestrator | Wednesday 14 May 2025 02:35:45 +0000 (0:00:00.331) 
0:10:52.961 ********* 2025-05-14 02:38:13.250696 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.250700 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.250703 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.250707 | orchestrator | 2025-05-14 02:38:13.250711 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:38:13.250714 | orchestrator | Wednesday 14 May 2025 02:35:46 +0000 (0:00:00.352) 0:10:53.313 ********* 2025-05-14 02:38:13.250718 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.250722 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.250726 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.250729 | orchestrator | 2025-05-14 02:38:13.250733 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:38:13.250737 | orchestrator | Wednesday 14 May 2025 02:35:46 +0000 (0:00:00.347) 0:10:53.660 ********* 2025-05-14 02:38:13.250740 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250744 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250748 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250751 | orchestrator | 2025-05-14 02:38:13.250755 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:38:13.250759 | orchestrator | Wednesday 14 May 2025 02:35:47 +0000 (0:00:00.624) 0:10:54.285 ********* 2025-05-14 02:38:13.250762 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250766 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250769 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250773 | orchestrator | 2025-05-14 02:38:13.250777 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:38:13.250780 | orchestrator | Wednesday 14 May 2025 02:35:47 +0000 (0:00:00.319) 0:10:54.605 ********* 2025-05-14 02:38:13.250788 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250792 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250796 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250799 | orchestrator | 2025-05-14 02:38:13.250806 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:38:13.250810 | orchestrator | Wednesday 14 May 2025 02:35:47 +0000 (0:00:00.310) 0:10:54.915 ********* 2025-05-14 02:38:13.250814 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.250817 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.250821 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.250825 | orchestrator | 2025-05-14 02:38:13.250829 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:38:13.250832 | orchestrator | Wednesday 14 May 2025 02:35:47 +0000 (0:00:00.347) 0:10:55.262 ********* 2025-05-14 02:38:13.250836 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250840 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250843 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250847 | orchestrator | 2025-05-14 02:38:13.250850 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:38:13.250854 | orchestrator | Wednesday 14 May 2025 02:35:48 +0000 (0:00:00.650) 0:10:55.913 ********* 2025-05-14 02:38:13.250858 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
02:38:13.250861 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250865 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250869 | orchestrator | 2025-05-14 02:38:13.250872 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:38:13.250876 | orchestrator | Wednesday 14 May 2025 02:35:49 +0000 (0:00:00.365) 0:10:56.278 ********* 2025-05-14 02:38:13.250880 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250884 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250887 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250891 | orchestrator | 2025-05-14 02:38:13.250895 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:38:13.250898 | orchestrator | Wednesday 14 May 2025 02:35:49 +0000 (0:00:00.304) 0:10:56.583 ********* 2025-05-14 02:38:13.250902 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250906 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250909 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250913 | orchestrator | 2025-05-14 02:38:13.250917 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:38:13.250920 | orchestrator | Wednesday 14 May 2025 02:35:49 +0000 (0:00:00.321) 0:10:56.904 ********* 2025-05-14 02:38:13.250924 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250928 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250931 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250935 | orchestrator | 2025-05-14 02:38:13.250939 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:38:13.250942 | orchestrator | Wednesday 14 May 2025 02:35:50 +0000 (0:00:00.638) 0:10:57.543 ********* 2025-05-14 02:38:13.250946 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250950 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250954 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250957 | orchestrator | 2025-05-14 02:38:13.250961 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:38:13.250964 | orchestrator | Wednesday 14 May 2025 02:35:50 +0000 (0:00:00.305) 0:10:57.849 ********* 2025-05-14 02:38:13.250968 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250972 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250975 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.250979 | orchestrator | 2025-05-14 02:38:13.250983 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:38:13.250987 | orchestrator | Wednesday 14 May 2025 02:35:50 +0000 (0:00:00.324) 0:10:58.173 ********* 2025-05-14 02:38:13.250990 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.250994 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.250998 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251001 | orchestrator | 2025-05-14 02:38:13.251005 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:38:13.251015 | orchestrator | Wednesday 14 May 2025 02:35:51 +0000 (0:00:00.317) 0:10:58.490 ********* 2025-05-14 02:38:13.251019 | orchestrator | skipping: [testbed-node-3] 2025-05-14 
02:38:13.251022 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251026 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251030 | orchestrator | 2025-05-14 02:38:13.251037 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:38:13.251041 | orchestrator | Wednesday 14 May 2025 02:35:51 +0000 (0:00:00.622) 0:10:59.112 ********* 2025-05-14 02:38:13.251044 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251048 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251052 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251055 | orchestrator | 2025-05-14 02:38:13.251059 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:38:13.251063 | orchestrator | Wednesday 14 May 2025 02:35:52 +0000 (0:00:00.345) 0:10:59.458 ********* 2025-05-14 02:38:13.251066 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251070 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251074 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251077 | orchestrator | 2025-05-14 02:38:13.251081 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:38:13.251085 | orchestrator | Wednesday 14 May 2025 02:35:52 +0000 (0:00:00.343) 0:10:59.801 ********* 2025-05-14 02:38:13.251088 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251092 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251096 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251099 | orchestrator | 2025-05-14 02:38:13.251103 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:38:13.251107 | orchestrator | Wednesday 14 May 2025 02:35:52 +0000 (0:00:00.318) 0:11:00.120 ********* 2025-05-14 02:38:13.251111 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:38:13.251114 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:38:13.251118 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251135 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:38:13.251139 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:38:13.251143 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251146 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:38:13.251150 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:38:13.251154 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251157 | orchestrator | 2025-05-14 02:38:13.251161 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:38:13.251165 | orchestrator | Wednesday 14 May 2025 02:35:53 +0000 (0:00:00.680) 0:11:00.800 ********* 2025-05-14 02:38:13.251168 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 02:38:13.251172 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 02:38:13.251176 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251179 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 02:38:13.251183 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 02:38:13.251187 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251190 | orchestrator | 
skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 02:38:13.251194 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 02:38:13.251198 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251201 | orchestrator | 2025-05-14 02:38:13.251205 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:38:13.251209 | orchestrator | Wednesday 14 May 2025 02:35:53 +0000 (0:00:00.359) 0:11:01.160 ********* 2025-05-14 02:38:13.251212 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251220 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251224 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251227 | orchestrator | 2025-05-14 02:38:13.251231 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:38:13.251235 | orchestrator | Wednesday 14 May 2025 02:35:54 +0000 (0:00:00.359) 0:11:01.519 ********* 2025-05-14 02:38:13.251238 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251242 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251246 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251249 | orchestrator | 2025-05-14 02:38:13.251253 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:38:13.251257 | orchestrator | Wednesday 14 May 2025 02:35:54 +0000 (0:00:00.352) 0:11:01.872 ********* 2025-05-14 02:38:13.251261 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251264 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251268 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251272 | orchestrator | 2025-05-14 02:38:13.251275 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:38:13.251279 | orchestrator | Wednesday 14 May 2025 02:35:55 +0000 (0:00:00.645) 0:11:02.518 ********* 2025-05-14 02:38:13.251283 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251287 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251290 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251294 | orchestrator | 2025-05-14 02:38:13.251298 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:38:13.251301 | orchestrator | Wednesday 14 May 2025 02:35:55 +0000 (0:00:00.316) 0:11:02.835 ********* 2025-05-14 02:38:13.251305 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251309 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251312 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251316 | orchestrator | 2025-05-14 02:38:13.251320 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:38:13.251323 | orchestrator | Wednesday 14 May 2025 02:35:55 +0000 (0:00:00.412) 0:11:03.247 ********* 2025-05-14 02:38:13.251327 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251331 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251334 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251338 | orchestrator | 2025-05-14 02:38:13.251342 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:38:13.251345 | orchestrator | Wednesday 14 May 2025 02:35:56 +0000 (0:00:00.406) 0:11:03.654 
********* 2025-05-14 02:38:13.251349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.251355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.251359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.251363 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251366 | orchestrator | 2025-05-14 02:38:13.251370 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:38:13.251374 | orchestrator | Wednesday 14 May 2025 02:35:57 +0000 (0:00:01.102) 0:11:04.756 ********* 2025-05-14 02:38:13.251378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.251381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.251385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.251389 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251392 | orchestrator | 2025-05-14 02:38:13.251396 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:38:13.251400 | orchestrator | Wednesday 14 May 2025 02:35:57 +0000 (0:00:00.466) 0:11:05.223 ********* 2025-05-14 02:38:13.251404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.251407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.251411 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.251418 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251421 | orchestrator | 2025-05-14 02:38:13.251425 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.251429 | orchestrator | Wednesday 14 May 2025 02:35:58 +0000 (0:00:00.494) 0:11:05.718 ********* 2025-05-14 02:38:13.251433 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251436 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251442 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251446 | orchestrator | 2025-05-14 02:38:13.251449 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:38:13.251453 | orchestrator | Wednesday 14 May 2025 02:35:58 +0000 (0:00:00.330) 0:11:06.049 ********* 2025-05-14 02:38:13.251457 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:38:13.251461 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251464 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:38:13.251468 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251472 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:38:13.251475 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251479 | orchestrator | 2025-05-14 02:38:13.251483 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:38:13.251487 | orchestrator | Wednesday 14 May 2025 02:35:59 +0000 (0:00:00.521) 0:11:06.571 ********* 2025-05-14 02:38:13.251490 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251494 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251498 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251501 | orchestrator | 2025-05-14 02:38:13.251505 | orchestrator | TASK [ceph-facts : reset rgw_instances 
(workaround)] *************************** 2025-05-14 02:38:13.251509 | orchestrator | Wednesday 14 May 2025 02:36:00 +0000 (0:00:00.735) 0:11:07.306 ********* 2025-05-14 02:38:13.251512 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251516 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251520 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251523 | orchestrator | 2025-05-14 02:38:13.251527 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:38:13.251531 | orchestrator | Wednesday 14 May 2025 02:36:00 +0000 (0:00:00.444) 0:11:07.751 ********* 2025-05-14 02:38:13.251534 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:38:13.251538 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251542 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:38:13.251545 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251549 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:38:13.251553 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251556 | orchestrator | 2025-05-14 02:38:13.251561 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:38:13.251567 | orchestrator | Wednesday 14 May 2025 02:36:01 +0000 (0:00:00.516) 0:11:08.268 ********* 2025-05-14 02:38:13.251573 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.251580 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251596 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.251602 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251608 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.251614 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251620 | orchestrator | 2025-05-14 02:38:13.251625 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:38:13.251630 | orchestrator | Wednesday 14 May 2025 02:36:01 +0000 (0:00:00.385) 0:11:08.654 ********* 2025-05-14 02:38:13.251636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.251646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.251652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.251658 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251664 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:38:13.251670 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:38:13.251676 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:38:13.251681 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251687 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:38:13.251693 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:38:13.251702 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:38:13.251708 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251713 | 
orchestrator | 2025-05-14 02:38:13.251719 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:38:13.251724 | orchestrator | Wednesday 14 May 2025 02:36:02 +0000 (0:00:00.890) 0:11:09.544 ********* 2025-05-14 02:38:13.251731 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251736 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251742 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251748 | orchestrator | 2025-05-14 02:38:13.251754 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:38:13.251760 | orchestrator | Wednesday 14 May 2025 02:36:02 +0000 (0:00:00.575) 0:11:10.120 ********* 2025-05-14 02:38:13.251766 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:38:13.251771 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251777 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:38:13.251783 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251789 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:38:13.251795 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251800 | orchestrator | 2025-05-14 02:38:13.251806 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:38:13.251812 | orchestrator | Wednesday 14 May 2025 02:36:03 +0000 (0:00:00.931) 0:11:11.051 ********* 2025-05-14 02:38:13.251818 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251823 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251829 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251835 | orchestrator | 2025-05-14 02:38:13.251841 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:38:13.251850 | orchestrator | Wednesday 14 May 2025 02:36:04 +0000 (0:00:00.566) 0:11:11.618 ********* 2025-05-14 02:38:13.251856 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251861 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251867 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251873 | orchestrator | 2025-05-14 02:38:13.251879 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-14 02:38:13.251884 | orchestrator | Wednesday 14 May 2025 02:36:05 +0000 (0:00:00.797) 0:11:12.415 ********* 2025-05-14 02:38:13.251890 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.251896 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.251901 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-14 02:38:13.251907 | orchestrator | 2025-05-14 02:38:13.251913 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-14 02:38:13.251919 | orchestrator | Wednesday 14 May 2025 02:36:05 +0000 (0:00:00.415) 0:11:12.830 ********* 2025-05-14 02:38:13.251925 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:38:13.251931 | orchestrator | 2025-05-14 02:38:13.251936 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-05-14 02:38:13.251942 | orchestrator | Wednesday 14 May 2025 02:36:07 +0000 (0:00:01.799) 0:11:14.630 ********* 2025-05-14 02:38:13.251954 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 
'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-14 02:38:13.251961 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.251967 | orchestrator | 2025-05-14 02:38:13.251973 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-05-14 02:38:13.251978 | orchestrator | Wednesday 14 May 2025 02:36:07 +0000 (0:00:00.614) 0:11:15.244 ********* 2025-05-14 02:38:13.251986 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:38:13.251998 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:38:13.252004 | orchestrator | 2025-05-14 02:38:13.252009 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-14 02:38:13.252015 | orchestrator | Wednesday 14 May 2025 02:36:14 +0000 (0:00:06.898) 0:11:22.142 ********* 2025-05-14 02:38:13.252021 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:38:13.252027 | orchestrator | 2025-05-14 02:38:13.252032 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-05-14 02:38:13.252038 | orchestrator | Wednesday 14 May 2025 02:36:17 +0000 (0:00:03.111) 0:11:25.253 ********* 2025-05-14 02:38:13.252044 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.252050 | orchestrator | 2025-05-14 02:38:13.252055 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-14 02:38:13.252061 | orchestrator | Wednesday 14 May 2025 02:36:18 +0000 (0:00:00.617) 0:11:25.871 ********* 2025-05-14 02:38:13.252067 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-14 02:38:13.252072 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-14 02:38:13.252078 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-14 02:38:13.252084 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-14 02:38:13.252093 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-14 02:38:13.252099 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-14 02:38:13.252105 | orchestrator | 2025-05-14 02:38:13.252112 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-05-14 02:38:13.252118 | orchestrator | Wednesday 14 May 2025 02:36:19 +0000 (0:00:01.051) 0:11:26.923 ********* 2025-05-14 02:38:13.252123 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:38:13.252129 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:38:13.252135 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-14 02:38:13.252142 | 
orchestrator | 2025-05-14 02:38:13.252149 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-14 02:38:13.252155 | orchestrator | Wednesday 14 May 2025 02:36:21 +0000 (0:00:01.748) 0:11:28.672 ********* 2025-05-14 02:38:13.252161 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 02:38:13.252167 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:38:13.252173 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.252177 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 02:38:13.252180 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:38:13.252188 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.252192 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 02:38:13.252196 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:38:13.252199 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.252203 | orchestrator | 2025-05-14 02:38:13.252207 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-14 02:38:13.252213 | orchestrator | Wednesday 14 May 2025 02:36:22 +0000 (0:00:01.188) 0:11:29.860 ********* 2025-05-14 02:38:13.252217 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.252220 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.252224 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.252228 | orchestrator | 2025-05-14 02:38:13.252232 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-14 02:38:13.252235 | orchestrator | Wednesday 14 May 2025 02:36:22 +0000 (0:00:00.370) 0:11:30.230 ********* 2025-05-14 02:38:13.252239 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.252243 | orchestrator | 2025-05-14 02:38:13.252246 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-14 02:38:13.252250 | orchestrator | Wednesday 14 May 2025 02:36:23 +0000 (0:00:00.851) 0:11:31.081 ********* 2025-05-14 02:38:13.252254 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.252258 | orchestrator | 2025-05-14 02:38:13.252261 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-14 02:38:13.252265 | orchestrator | Wednesday 14 May 2025 02:36:24 +0000 (0:00:00.558) 0:11:31.640 ********* 2025-05-14 02:38:13.252269 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.252272 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.252276 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.252280 | orchestrator | 2025-05-14 02:38:13.252284 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-14 02:38:13.252288 | orchestrator | Wednesday 14 May 2025 02:36:25 +0000 (0:00:01.575) 0:11:33.216 ********* 2025-05-14 02:38:13.252292 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.252295 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.252299 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.252303 | orchestrator | 2025-05-14 02:38:13.252306 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-05-14 02:38:13.252310 | 
orchestrator | Wednesday 14 May 2025 02:36:27 +0000 (0:00:01.160) 0:11:34.376 ********* 2025-05-14 02:38:13.252314 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.252317 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.252321 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.252325 | orchestrator | 2025-05-14 02:38:13.252329 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-14 02:38:13.252332 | orchestrator | Wednesday 14 May 2025 02:36:28 +0000 (0:00:01.621) 0:11:35.998 ********* 2025-05-14 02:38:13.252336 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.252340 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.252343 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.252347 | orchestrator | 2025-05-14 02:38:13.252351 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-14 02:38:13.252355 | orchestrator | Wednesday 14 May 2025 02:36:30 +0000 (0:00:01.884) 0:11:37.883 ********* 2025-05-14 02:38:13.252358 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-14 02:38:13.252362 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-14 02:38:13.252366 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-05-14 02:38:13.252369 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.252373 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.252380 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.252384 | orchestrator | 2025-05-14 02:38:13.252387 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 02:38:13.252391 | orchestrator | Wednesday 14 May 2025 02:36:47 +0000 (0:00:17.053) 0:11:54.937 ********* 2025-05-14 02:38:13.252395 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.252398 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.252402 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.252406 | orchestrator | 2025-05-14 02:38:13.252409 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-14 02:38:13.252413 | orchestrator | Wednesday 14 May 2025 02:36:48 +0000 (0:00:00.674) 0:11:55.611 ********* 2025-05-14 02:38:13.252417 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.252420 | orchestrator | 2025-05-14 02:38:13.252427 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-05-14 02:38:13.252431 | orchestrator | Wednesday 14 May 2025 02:36:49 +0000 (0:00:00.791) 0:11:56.403 ********* 2025-05-14 02:38:13.252435 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.252439 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.252442 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.252446 | orchestrator | 2025-05-14 02:38:13.252450 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-14 02:38:13.252453 | orchestrator | Wednesday 14 May 2025 02:36:49 +0000 (0:00:00.343) 0:11:56.746 ********* 2025-05-14 02:38:13.252457 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.252461 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.252464 | orchestrator | 
changed: [testbed-node-5] 2025-05-14 02:38:13.252468 | orchestrator | 2025-05-14 02:38:13.252472 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-05-14 02:38:13.252476 | orchestrator | Wednesday 14 May 2025 02:36:50 +0000 (0:00:01.226) 0:11:57.972 ********* 2025-05-14 02:38:13.252479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.252483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.252487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.252490 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.252494 | orchestrator | 2025-05-14 02:38:13.252498 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-14 02:38:13.252502 | orchestrator | Wednesday 14 May 2025 02:36:51 +0000 (0:00:00.902) 0:11:58.875 ********* 2025-05-14 02:38:13.252505 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.252509 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.252515 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.252519 | orchestrator | 2025-05-14 02:38:13.252522 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:38:13.252526 | orchestrator | Wednesday 14 May 2025 02:36:52 +0000 (0:00:00.603) 0:11:59.478 ********* 2025-05-14 02:38:13.252530 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.252533 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.252537 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.252541 | orchestrator | 2025-05-14 02:38:13.252544 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-14 02:38:13.252548 | orchestrator | 2025-05-14 02:38:13.252552 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-14 02:38:13.252555 | orchestrator | Wednesday 14 May 2025 02:36:54 +0000 (0:00:02.082) 0:12:01.560 ********* 2025-05-14 02:38:13.252559 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.252563 | orchestrator | 2025-05-14 02:38:13.252567 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-14 02:38:13.252570 | orchestrator | Wednesday 14 May 2025 02:36:55 +0000 (0:00:00.762) 0:12:02.322 ********* 2025-05-14 02:38:13.252574 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.252581 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.252585 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.252600 | orchestrator | 2025-05-14 02:38:13.252604 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-14 02:38:13.252607 | orchestrator | Wednesday 14 May 2025 02:36:55 +0000 (0:00:00.336) 0:12:02.659 ********* 2025-05-14 02:38:13.252611 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.252615 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.252618 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.252622 | orchestrator | 2025-05-14 02:38:13.252626 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-14 02:38:13.252630 | orchestrator | Wednesday 14 May 2025 02:36:56 +0000 (0:00:00.728) 0:12:03.388 ********* 2025-05-14 
02:38:13.252633 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.252637 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.252641 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.252644 | orchestrator | 2025-05-14 02:38:13.252648 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-14 02:38:13.252652 | orchestrator | Wednesday 14 May 2025 02:36:56 +0000 (0:00:00.713) 0:12:04.101 ********* 2025-05-14 02:38:13.252656 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.252659 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.252663 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.252667 | orchestrator | 2025-05-14 02:38:13.252670 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-14 02:38:13.252674 | orchestrator | Wednesday 14 May 2025 02:36:57 +0000 (0:00:01.058) 0:12:05.160 ********* 2025-05-14 02:38:13.252678 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.252682 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.252687 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.252693 | orchestrator | 2025-05-14 02:38:13.252699 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-14 02:38:13.252705 | orchestrator | Wednesday 14 May 2025 02:36:58 +0000 (0:00:00.390) 0:12:05.550 ********* 2025-05-14 02:38:13.252711 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.252717 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.252722 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.252729 | orchestrator | 2025-05-14 02:38:13.252735 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-14 02:38:13.252741 | orchestrator | Wednesday 14 May 2025 02:36:58 +0000 (0:00:00.313) 0:12:05.864 ********* 2025-05-14 02:38:13.252747 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.252753 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.252759 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.252765 | orchestrator | 2025-05-14 02:38:13.252769 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-14 02:38:13.252773 | orchestrator | Wednesday 14 May 2025 02:36:58 +0000 (0:00:00.324) 0:12:06.189 ********* 2025-05-14 02:38:13.252776 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.252780 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.252784 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.252788 | orchestrator | 2025-05-14 02:38:13.252791 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-14 02:38:13.252798 | orchestrator | Wednesday 14 May 2025 02:36:59 +0000 (0:00:00.630) 0:12:06.819 ********* 2025-05-14 02:38:13.252802 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.252806 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.252810 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.252813 | orchestrator | 2025-05-14 02:38:13.252817 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-14 02:38:13.252821 | orchestrator | Wednesday 14 May 2025 02:36:59 +0000 (0:00:00.312) 0:12:07.131 ********* 2025-05-14 02:38:13.252824 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.252828 | 
orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.252837 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.252841 | orchestrator | 2025-05-14 02:38:13.252845 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-14 02:38:13.252849 | orchestrator | Wednesday 14 May 2025 02:37:00 +0000 (0:00:00.338) 0:12:07.470 ********* 2025-05-14 02:38:13.252853 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.252856 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.252860 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.252864 | orchestrator | 2025-05-14 02:38:13.252867 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-14 02:38:13.252871 | orchestrator | Wednesday 14 May 2025 02:37:00 +0000 (0:00:00.708) 0:12:08.178 ********* 2025-05-14 02:38:13.252875 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.252879 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.252882 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.252886 | orchestrator | 2025-05-14 02:38:13.252889 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-14 02:38:13.252893 | orchestrator | Wednesday 14 May 2025 02:37:01 +0000 (0:00:00.632) 0:12:08.811 ********* 2025-05-14 02:38:13.252899 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.252903 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.252907 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.252910 | orchestrator | 2025-05-14 02:38:13.252914 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-14 02:38:13.252918 | orchestrator | Wednesday 14 May 2025 02:37:01 +0000 (0:00:00.304) 0:12:09.116 ********* 2025-05-14 02:38:13.252921 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.252925 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.252929 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.252932 | orchestrator | 2025-05-14 02:38:13.252936 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-14 02:38:13.252940 | orchestrator | Wednesday 14 May 2025 02:37:02 +0000 (0:00:00.318) 0:12:09.434 ********* 2025-05-14 02:38:13.252943 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.252947 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.252951 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.252954 | orchestrator | 2025-05-14 02:38:13.252958 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-14 02:38:13.252962 | orchestrator | Wednesday 14 May 2025 02:37:02 +0000 (0:00:00.312) 0:12:09.747 ********* 2025-05-14 02:38:13.252965 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.252969 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.252973 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.252976 | orchestrator | 2025-05-14 02:38:13.252980 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-14 02:38:13.252984 | orchestrator | Wednesday 14 May 2025 02:37:03 +0000 (0:00:00.633) 0:12:10.380 ********* 2025-05-14 02:38:13.252987 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.252991 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.252995 | orchestrator | skipping: [testbed-node-5] 2025-05-14 
02:38:13.252998 | orchestrator | 2025-05-14 02:38:13.253002 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-14 02:38:13.253006 | orchestrator | Wednesday 14 May 2025 02:37:03 +0000 (0:00:00.392) 0:12:10.772 ********* 2025-05-14 02:38:13.253009 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253013 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253017 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253020 | orchestrator | 2025-05-14 02:38:13.253024 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-14 02:38:13.253028 | orchestrator | Wednesday 14 May 2025 02:37:03 +0000 (0:00:00.303) 0:12:11.076 ********* 2025-05-14 02:38:13.253031 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253035 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253039 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253042 | orchestrator | 2025-05-14 02:38:13.253049 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-14 02:38:13.253053 | orchestrator | Wednesday 14 May 2025 02:37:04 +0000 (0:00:00.334) 0:12:11.410 ********* 2025-05-14 02:38:13.253056 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.253060 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.253064 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.253067 | orchestrator | 2025-05-14 02:38:13.253071 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-14 02:38:13.253075 | orchestrator | Wednesday 14 May 2025 02:37:04 +0000 (0:00:00.604) 0:12:12.014 ********* 2025-05-14 02:38:13.253078 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253082 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253086 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253089 | orchestrator | 2025-05-14 02:38:13.253093 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-14 02:38:13.253097 | orchestrator | Wednesday 14 May 2025 02:37:05 +0000 (0:00:00.357) 0:12:12.372 ********* 2025-05-14 02:38:13.253100 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253104 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253108 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253111 | orchestrator | 2025-05-14 02:38:13.253115 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-14 02:38:13.253119 | orchestrator | Wednesday 14 May 2025 02:37:05 +0000 (0:00:00.332) 0:12:12.704 ********* 2025-05-14 02:38:13.253122 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253126 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253129 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253133 | orchestrator | 2025-05-14 02:38:13.253137 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-14 02:38:13.253143 | orchestrator | Wednesday 14 May 2025 02:37:05 +0000 (0:00:00.329) 0:12:13.034 ********* 2025-05-14 02:38:13.253147 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253151 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253154 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253158 | orchestrator | 2025-05-14 02:38:13.253162 | orchestrator | 
TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-14 02:38:13.253166 | orchestrator | Wednesday 14 May 2025 02:37:06 +0000 (0:00:00.612) 0:12:13.647 ********* 2025-05-14 02:38:13.253169 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253173 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253177 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253180 | orchestrator | 2025-05-14 02:38:13.253184 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-14 02:38:13.253188 | orchestrator | Wednesday 14 May 2025 02:37:06 +0000 (0:00:00.379) 0:12:14.026 ********* 2025-05-14 02:38:13.253191 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253195 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253200 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253206 | orchestrator | 2025-05-14 02:38:13.253212 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-14 02:38:13.253218 | orchestrator | Wednesday 14 May 2025 02:37:07 +0000 (0:00:00.337) 0:12:14.364 ********* 2025-05-14 02:38:13.253224 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253230 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253236 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253242 | orchestrator | 2025-05-14 02:38:13.253248 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-14 02:38:13.253257 | orchestrator | Wednesday 14 May 2025 02:37:07 +0000 (0:00:00.325) 0:12:14.689 ********* 2025-05-14 02:38:13.253264 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253269 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253276 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253281 | orchestrator | 2025-05-14 02:38:13.253287 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-14 02:38:13.253297 | orchestrator | Wednesday 14 May 2025 02:37:08 +0000 (0:00:00.612) 0:12:15.302 ********* 2025-05-14 02:38:13.253304 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253310 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253316 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253322 | orchestrator | 2025-05-14 02:38:13.253328 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-14 02:38:13.253334 | orchestrator | Wednesday 14 May 2025 02:37:08 +0000 (0:00:00.348) 0:12:15.651 ********* 2025-05-14 02:38:13.253340 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253346 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253351 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253357 | orchestrator | 2025-05-14 02:38:13.253362 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-14 02:38:13.253368 | orchestrator | Wednesday 14 May 2025 02:37:08 +0000 (0:00:00.351) 0:12:16.003 ********* 2025-05-14 02:38:13.253374 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253380 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253385 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253391 | orchestrator | 2025-05-14 
02:38:13.253397 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-14 02:38:13.253403 | orchestrator | Wednesday 14 May 2025 02:37:09 +0000 (0:00:00.377) 0:12:16.380 ********* 2025-05-14 02:38:13.253408 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253414 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253420 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253425 | orchestrator | 2025-05-14 02:38:13.253431 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-14 02:38:13.253437 | orchestrator | Wednesday 14 May 2025 02:37:09 +0000 (0:00:00.642) 0:12:17.023 ********* 2025-05-14 02:38:13.253443 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:38:13.253448 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-14 02:38:13.253454 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253460 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:38:13.253466 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-14 02:38:13.253471 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253477 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:38:13.253483 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-14 02:38:13.253488 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253494 | orchestrator | 2025-05-14 02:38:13.253500 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-14 02:38:13.253505 | orchestrator | Wednesday 14 May 2025 02:37:10 +0000 (0:00:00.395) 0:12:17.418 ********* 2025-05-14 02:38:13.253511 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-14 02:38:13.253517 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-14 02:38:13.253522 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253528 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-14 02:38:13.253534 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-14 02:38:13.253540 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253545 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-14 02:38:13.253551 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-14 02:38:13.253557 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253563 | orchestrator | 2025-05-14 02:38:13.253568 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-14 02:38:13.253574 | orchestrator | Wednesday 14 May 2025 02:37:10 +0000 (0:00:00.429) 0:12:17.848 ********* 2025-05-14 02:38:13.253580 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253619 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253626 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253631 | orchestrator | 2025-05-14 02:38:13.253637 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-14 02:38:13.253647 | orchestrator | Wednesday 14 May 2025 02:37:11 +0000 (0:00:00.421) 0:12:18.269 ********* 2025-05-14 02:38:13.253653 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253659 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253665 | orchestrator | skipping: [testbed-node-5] 
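Note: the skipped ceph-config tasks above wrap ceph-volume reporting calls that only execute on hosts where OSD devices are actually being prepared in this play. A minimal sketch of the underlying commands follows; the device paths are placeholder assumptions, not values taken from this build, and num_osds is derived from the JSON that ceph-volume prints.

# Sketch only: the commands behind "ceph-volume lvm batch --report" and
# "ceph-volume lvm list" as named in the tasks above. /dev/sdb and /dev/sdc
# are illustrative placeholders, not devices from this log.
ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc
# Lists OSDs ceph-volume has already created; the play adds these to num_osds.
ceph-volume lvm list --format=json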
2025-05-14 02:38:13.253672 | orchestrator | 2025-05-14 02:38:13.253677 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:38:13.253683 | orchestrator | Wednesday 14 May 2025 02:37:11 +0000 (0:00:00.717) 0:12:18.987 ********* 2025-05-14 02:38:13.253689 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253695 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253701 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253707 | orchestrator | 2025-05-14 02:38:13.253714 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:38:13.253720 | orchestrator | Wednesday 14 May 2025 02:37:12 +0000 (0:00:00.363) 0:12:19.351 ********* 2025-05-14 02:38:13.253726 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253733 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253739 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253745 | orchestrator | 2025-05-14 02:38:13.253751 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:38:13.253758 | orchestrator | Wednesday 14 May 2025 02:37:12 +0000 (0:00:00.339) 0:12:19.690 ********* 2025-05-14 02:38:13.253764 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253770 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253776 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253782 | orchestrator | 2025-05-14 02:38:13.253786 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:38:13.253793 | orchestrator | Wednesday 14 May 2025 02:37:12 +0000 (0:00:00.353) 0:12:20.044 ********* 2025-05-14 02:38:13.253796 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253800 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253804 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253808 | orchestrator | 2025-05-14 02:38:13.253811 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:38:13.253815 | orchestrator | Wednesday 14 May 2025 02:37:13 +0000 (0:00:00.657) 0:12:20.702 ********* 2025-05-14 02:38:13.253819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.253823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.253826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.253830 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253834 | orchestrator | 2025-05-14 02:38:13.253837 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:38:13.253841 | orchestrator | Wednesday 14 May 2025 02:37:14 +0000 (0:00:00.563) 0:12:21.265 ********* 2025-05-14 02:38:13.253845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.253849 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.253852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.253856 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253860 | orchestrator | 2025-05-14 02:38:13.253864 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:38:13.253867 | 
orchestrator | Wednesday 14 May 2025 02:37:14 +0000 (0:00:00.461) 0:12:21.727 ********* 2025-05-14 02:38:13.253871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.253875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.253878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.253886 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253889 | orchestrator | 2025-05-14 02:38:13.253893 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.253897 | orchestrator | Wednesday 14 May 2025 02:37:14 +0000 (0:00:00.469) 0:12:22.196 ********* 2025-05-14 02:38:13.253901 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253905 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253908 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253912 | orchestrator | 2025-05-14 02:38:13.253916 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:38:13.253920 | orchestrator | Wednesday 14 May 2025 02:37:15 +0000 (0:00:00.379) 0:12:22.576 ********* 2025-05-14 02:38:13.253923 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:38:13.253927 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253931 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:38:13.253934 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253938 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:38:13.253942 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253945 | orchestrator | 2025-05-14 02:38:13.253949 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:38:13.253953 | orchestrator | Wednesday 14 May 2025 02:37:15 +0000 (0:00:00.445) 0:12:23.021 ********* 2025-05-14 02:38:13.253957 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253960 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253964 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253968 | orchestrator | 2025-05-14 02:38:13.253971 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:38:13.253975 | orchestrator | Wednesday 14 May 2025 02:37:16 +0000 (0:00:00.594) 0:12:23.616 ********* 2025-05-14 02:38:13.253979 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.253982 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.253986 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.253990 | orchestrator | 2025-05-14 02:38:13.253993 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:38:13.253997 | orchestrator | Wednesday 14 May 2025 02:37:16 +0000 (0:00:00.326) 0:12:23.943 ********* 2025-05-14 02:38:13.254001 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:38:13.254005 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254008 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:38:13.254034 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.254039 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:38:13.254042 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.254046 | orchestrator | 2025-05-14 02:38:13.254050 | orchestrator | TASK [ceph-facts 
: set_fact rgw_instances_host] ******************************** 2025-05-14 02:38:13.254053 | orchestrator | Wednesday 14 May 2025 02:37:17 +0000 (0:00:00.494) 0:12:24.437 ********* 2025-05-14 02:38:13.254057 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.254061 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254065 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.254069 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.254072 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:38:13.254076 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.254080 | orchestrator | 2025-05-14 02:38:13.254084 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:38:13.254087 | orchestrator | Wednesday 14 May 2025 02:37:17 +0000 (0:00:00.331) 0:12:24.768 ********* 2025-05-14 02:38:13.254091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.254098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.254104 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.254108 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254112 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:38:13.254115 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:38:13.254119 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:38:13.254123 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.254126 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:38:13.254130 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:38:13.254134 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:38:13.254137 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.254141 | orchestrator | 2025-05-14 02:38:13.254145 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-14 02:38:13.254149 | orchestrator | Wednesday 14 May 2025 02:37:18 +0000 (0:00:00.972) 0:12:25.741 ********* 2025-05-14 02:38:13.254152 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254156 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.254160 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.254163 | orchestrator | 2025-05-14 02:38:13.254167 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-14 02:38:13.254171 | orchestrator | Wednesday 14 May 2025 02:37:19 +0000 (0:00:00.535) 0:12:26.276 ********* 2025-05-14 02:38:13.254175 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:38:13.254178 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254182 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:38:13.254186 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.254189 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:38:13.254193 | orchestrator | skipping: [testbed-node-5] 
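For context on the ceph-facts tasks above: the per-host rgw_instances fact that later drives the radosgw units is a list of dicts with exactly the keys visible in the skipped items (instance_name, radosgw_address, radosgw_frontend_port). A minimal sketch of how such a fact can be assembled with set_fact, using assumed variable names (radosgw_num_instances, _radosgw_address) for illustration rather than ceph-ansible's exact implementation:

- name: set_fact rgw_instances (illustrative sketch, not the exact ceph-ansible task)
  ansible.builtin.set_fact:
    rgw_instances: >-
      {{ rgw_instances | default([]) +
         [{'instance_name': 'rgw' ~ item,
           'radosgw_address': _radosgw_address,
           'radosgw_frontend_port': (radosgw_frontend_port | int) + (item | int)}] }}
  # radosgw_num_instances and _radosgw_address are assumed names; the testbed runs one instance (rgw0) per node
  with_sequence: start=0 end={{ radosgw_num_instances - 1 }}

With one instance per node this yields entries such as {'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}, matching the items shown in the skipped tasks above.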
2025-05-14 02:38:13.254197 | orchestrator | 2025-05-14 02:38:13.254201 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-14 02:38:13.254204 | orchestrator | Wednesday 14 May 2025 02:37:19 +0000 (0:00:00.901) 0:12:27.178 ********* 2025-05-14 02:38:13.254208 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254212 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.254215 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.254219 | orchestrator | 2025-05-14 02:38:13.254223 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-14 02:38:13.254227 | orchestrator | Wednesday 14 May 2025 02:37:20 +0000 (0:00:00.568) 0:12:27.747 ********* 2025-05-14 02:38:13.254230 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254234 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.254238 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.254241 | orchestrator | 2025-05-14 02:38:13.254245 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-05-14 02:38:13.254249 | orchestrator | Wednesday 14 May 2025 02:37:21 +0000 (0:00:00.874) 0:12:28.621 ********* 2025-05-14 02:38:13.254252 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.254256 | orchestrator | 2025-05-14 02:38:13.254260 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-05-14 02:38:13.254264 | orchestrator | Wednesday 14 May 2025 02:37:21 +0000 (0:00:00.519) 0:12:29.140 ********* 2025-05-14 02:38:13.254268 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-05-14 02:38:13.254271 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-05-14 02:38:13.254275 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-05-14 02:38:13.254279 | orchestrator | 2025-05-14 02:38:13.254283 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-05-14 02:38:13.254291 | orchestrator | Wednesday 14 May 2025 02:37:22 +0000 (0:00:01.010) 0:12:30.151 ********* 2025-05-14 02:38:13.254295 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:38:13.254298 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:38:13.254302 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-14 02:38:13.254306 | orchestrator | 2025-05-14 02:38:13.254310 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-05-14 02:38:13.254313 | orchestrator | Wednesday 14 May 2025 02:37:24 +0000 (0:00:01.753) 0:12:31.904 ********* 2025-05-14 02:38:13.254320 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-14 02:38:13.254323 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-14 02:38:13.254327 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.254331 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-14 02:38:13.254335 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-14 02:38:13.254338 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.254342 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-14 02:38:13.254346 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-14 02:38:13.254350 | 
orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.254353 | orchestrator | 2025-05-14 02:38:13.254357 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-05-14 02:38:13.254361 | orchestrator | Wednesday 14 May 2025 02:37:25 +0000 (0:00:01.236) 0:12:33.141 ********* 2025-05-14 02:38:13.254364 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254368 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.254372 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.254376 | orchestrator | 2025-05-14 02:38:13.254379 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-05-14 02:38:13.254383 | orchestrator | Wednesday 14 May 2025 02:37:26 +0000 (0:00:00.313) 0:12:33.455 ********* 2025-05-14 02:38:13.254387 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254391 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.254394 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.254398 | orchestrator | 2025-05-14 02:38:13.254402 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-05-14 02:38:13.254406 | orchestrator | Wednesday 14 May 2025 02:37:26 +0000 (0:00:00.626) 0:12:34.081 ********* 2025-05-14 02:38:13.254412 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-14 02:38:13.254416 | orchestrator | 2025-05-14 02:38:13.254419 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-05-14 02:38:13.254423 | orchestrator | Wednesday 14 May 2025 02:37:27 +0000 (0:00:00.243) 0:12:34.325 ********* 2025-05-14 02:38:13.254427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254447 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254451 | orchestrator | 2025-05-14 02:38:13.254454 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-05-14 02:38:13.254458 | orchestrator | Wednesday 14 May 2025 02:37:27 +0000 (0:00:00.674) 0:12:34.999 ********* 2025-05-14 02:38:13.254462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254485 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254489 | orchestrator | 2025-05-14 02:38:13.254492 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-05-14 02:38:13.254496 | orchestrator | Wednesday 14 May 2025 02:37:28 +0000 (0:00:00.919) 0:12:35.919 ********* 2025-05-14 02:38:13.254500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-14 02:38:13.254519 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254522 | orchestrator | 2025-05-14 02:38:13.254526 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-05-14 02:38:13.254530 | orchestrator | Wednesday 14 May 2025 02:37:29 +0000 (0:00:00.972) 0:12:36.891 ********* 2025-05-14 02:38:13.254536 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 02:38:13.254541 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 02:38:13.254545 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 02:38:13.254548 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 02:38:13.254552 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-14 02:38:13.254556 | orchestrator | 2025-05-14 02:38:13.254559 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-05-14 02:38:13.254563 | orchestrator | Wednesday 14 May 2025 02:37:54 +0000 (0:00:24.791) 0:13:01.683 ********* 2025-05-14 02:38:13.254567 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254571 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.254574 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.254578 | orchestrator | 2025-05-14 02:38:13.254584 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-05-14 02:38:13.254600 | orchestrator | Wednesday 14 May 
2025 02:37:54 +0000 (0:00:00.458) 0:13:02.141 ********* 2025-05-14 02:38:13.254604 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254607 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.254611 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.254618 | orchestrator | 2025-05-14 02:38:13.254622 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-05-14 02:38:13.254626 | orchestrator | Wednesday 14 May 2025 02:37:55 +0000 (0:00:00.283) 0:13:02.424 ********* 2025-05-14 02:38:13.254630 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.254633 | orchestrator | 2025-05-14 02:38:13.254637 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-05-14 02:38:13.254641 | orchestrator | Wednesday 14 May 2025 02:37:55 +0000 (0:00:00.517) 0:13:02.941 ********* 2025-05-14 02:38:13.254644 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.254648 | orchestrator | 2025-05-14 02:38:13.254652 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-05-14 02:38:13.254655 | orchestrator | Wednesday 14 May 2025 02:37:56 +0000 (0:00:00.636) 0:13:03.578 ********* 2025-05-14 02:38:13.254659 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.254663 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.254666 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.254670 | orchestrator | 2025-05-14 02:38:13.254674 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-05-14 02:38:13.254678 | orchestrator | Wednesday 14 May 2025 02:37:57 +0000 (0:00:01.158) 0:13:04.737 ********* 2025-05-14 02:38:13.254683 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.254689 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.254695 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.254701 | orchestrator | 2025-05-14 02:38:13.254707 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-05-14 02:38:13.254714 | orchestrator | Wednesday 14 May 2025 02:37:58 +0000 (0:00:01.109) 0:13:05.846 ********* 2025-05-14 02:38:13.254720 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.254726 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.254733 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.254737 | orchestrator | 2025-05-14 02:38:13.254741 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-05-14 02:38:13.254744 | orchestrator | Wednesday 14 May 2025 02:38:00 +0000 (0:00:01.864) 0:13:07.711 ********* 2025-05-14 02:38:13.254748 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-14 02:38:13.254752 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-14 02:38:13.254756 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-14 02:38:13.254759 | orchestrator | 2025-05-14 02:38:13.254763 | orchestrator | TASK [ceph-rgw : 
include_tasks multisite/main.yml] ***************************** 2025-05-14 02:38:13.254767 | orchestrator | Wednesday 14 May 2025 02:38:02 +0000 (0:00:01.904) 0:13:09.615 ********* 2025-05-14 02:38:13.254770 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254774 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:38:13.254778 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:38:13.254782 | orchestrator | 2025-05-14 02:38:13.254785 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-14 02:38:13.254789 | orchestrator | Wednesday 14 May 2025 02:38:03 +0000 (0:00:01.234) 0:13:10.850 ********* 2025-05-14 02:38:13.254793 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.254796 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.254800 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.254804 | orchestrator | 2025-05-14 02:38:13.254807 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-14 02:38:13.254811 | orchestrator | Wednesday 14 May 2025 02:38:04 +0000 (0:00:00.676) 0:13:11.526 ********* 2025-05-14 02:38:13.254818 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:38:13.254828 | orchestrator | 2025-05-14 02:38:13.254831 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-14 02:38:13.254835 | orchestrator | Wednesday 14 May 2025 02:38:05 +0000 (0:00:00.775) 0:13:12.302 ********* 2025-05-14 02:38:13.254839 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.254843 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.254846 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.254850 | orchestrator | 2025-05-14 02:38:13.254854 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-14 02:38:13.254857 | orchestrator | Wednesday 14 May 2025 02:38:05 +0000 (0:00:00.325) 0:13:12.628 ********* 2025-05-14 02:38:13.254861 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.254865 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.254868 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.254872 | orchestrator | 2025-05-14 02:38:13.254876 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-14 02:38:13.254879 | orchestrator | Wednesday 14 May 2025 02:38:06 +0000 (0:00:01.182) 0:13:13.811 ********* 2025-05-14 02:38:13.254883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:38:13.254887 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:38:13.254890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:38:13.254894 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:38:13.254898 | orchestrator | 2025-05-14 02:38:13.254901 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-14 02:38:13.254905 | orchestrator | Wednesday 14 May 2025 02:38:07 +0000 (0:00:01.045) 0:13:14.857 ********* 2025-05-14 02:38:13.254912 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:38:13.254915 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:38:13.254919 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:38:13.254923 | orchestrator | 2025-05-14 02:38:13.254926 | orchestrator | RUNNING HANDLER 
[ceph-handler : remove tempdir for scripts] ******************** 2025-05-14 02:38:13.254930 | orchestrator | Wednesday 14 May 2025 02:38:07 +0000 (0:00:00.352) 0:13:15.209 ********* 2025-05-14 02:38:13.254934 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:38:13.254938 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:38:13.254941 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:38:13.254945 | orchestrator | 2025-05-14 02:38:13.254949 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:38:13.254952 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-05-14 02:38:13.254957 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-05-14 02:38:13.254960 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-05-14 02:38:13.254964 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-05-14 02:38:13.254968 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-05-14 02:38:13.254972 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-05-14 02:38:13.254975 | orchestrator | 2025-05-14 02:38:13.254979 | orchestrator | 2025-05-14 02:38:13.254983 | orchestrator | 2025-05-14 02:38:13.254986 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:38:13.254990 | orchestrator | Wednesday 14 May 2025 02:38:09 +0000 (0:00:01.544) 0:13:16.754 ********* 2025-05-14 02:38:13.254999 | orchestrator | =============================================================================== 2025-05-14 02:38:13.255002 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 46.96s 2025-05-14 02:38:13.255006 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 40.29s 2025-05-14 02:38:13.255010 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 24.79s 2025-05-14 02:38:13.255013 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.51s 2025-05-14 02:38:13.255017 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.05s 2025-05-14 02:38:13.255021 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.45s 2025-05-14 02:38:13.255024 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.61s 2025-05-14 02:38:13.255028 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.38s 2025-05-14 02:38:13.255031 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.61s 2025-05-14 02:38:13.255035 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.90s 2025-05-14 02:38:13.255039 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.23s 2025-05-14 02:38:13.255042 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.17s 2025-05-14 02:38:13.255046 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 5.42s 2025-05-14 02:38:13.255050 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 4.93s 2025-05-14 02:38:13.255053 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 4.68s 2025-05-14 02:38:13.255060 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.32s 2025-05-14 02:38:13.255063 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 4.09s 2025-05-14 02:38:13.255067 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.53s 2025-05-14 02:38:13.255071 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.47s 2025-05-14 02:38:13.255074 | orchestrator | ceph-mds : create ceph filesystem --------------------------------------- 3.11s 2025-05-14 02:38:13.255078 | orchestrator | 2025-05-14 02:38:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:16.277860 | orchestrator | 2025-05-14 02:38:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:38:16.280055 | orchestrator | 2025-05-14 02:38:16 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:38:16.281099 | orchestrator | 2025-05-14 02:38:16 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in state STARTED 2025-05-14 02:38:16.282992 | orchestrator | 2025-05-14 02:38:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:19.319073 | orchestrator | 2025-05-14 02:38:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:38:19.320827 | orchestrator | 2025-05-14 02:38:19 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:38:19.323224 | orchestrator | 2025-05-14 02:38:19 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in state STARTED 2025-05-14 02:38:19.323290 | orchestrator | 2025-05-14 02:38:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:22.358454 | orchestrator | 2025-05-14 02:38:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:38:22.358841 | orchestrator | 2025-05-14 02:38:22 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state STARTED 2025-05-14 02:38:22.359767 | orchestrator | 2025-05-14 02:38:22 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in 
state STARTED 2025-05-14 02:38:22.359810 | orchestrator | 2025-05-14 02:38:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:25.407127 | orchestrator | 2025-05-14 02:38:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:38:25.407636 | orchestrator | 2025-05-14 02:38:25 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:38:25.411694 | orchestrator | 2025-05-14 02:38:25.411752 | orchestrator | 2025-05-14 02:38:25.411767 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-14 02:38:25.411779 | orchestrator | 2025-05-14 02:38:25.411789 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-14 02:38:25.411800 | orchestrator | Wednesday 14 May 2025 02:34:53 +0000 (0:00:00.194) 0:00:00.194 ********* 2025-05-14 02:38:25.411811 | orchestrator | ok: [localhost] => { 2025-05-14 02:38:25.411822 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-14 02:38:25.411831 | orchestrator | } 2025-05-14 02:38:25.411841 | orchestrator | 2025-05-14 02:38:25.411850 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-14 02:38:25.411859 | orchestrator | Wednesday 14 May 2025 02:34:53 +0000 (0:00:00.068) 0:00:00.262 ********* 2025-05-14 02:38:25.411869 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-14 02:38:25.411881 | orchestrator | ...ignoring 2025-05-14 02:38:25.411890 | orchestrator | 2025-05-14 02:38:25.411900 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-14 02:38:25.411910 | orchestrator | Wednesday 14 May 2025 02:34:56 +0000 (0:00:02.554) 0:00:02.817 ********* 2025-05-14 02:38:25.411921 | orchestrator | skipping: [localhost] 2025-05-14 02:38:25.411931 | orchestrator | 2025-05-14 02:38:25.411942 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-14 02:38:25.411952 | orchestrator | Wednesday 14 May 2025 02:34:56 +0000 (0:00:00.071) 0:00:02.889 ********* 2025-05-14 02:38:25.411963 | orchestrator | ok: [localhost] 2025-05-14 02:38:25.411973 | orchestrator | 2025-05-14 02:38:25.411984 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:38:25.411994 | orchestrator | 2025-05-14 02:38:25.412000 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:38:25.412007 | orchestrator | Wednesday 14 May 2025 02:34:56 +0000 (0:00:00.141) 0:00:03.031 ********* 2025-05-14 02:38:25.412013 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:25.412019 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:25.412026 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:25.412032 | orchestrator | 2025-05-14 02:38:25.412038 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:38:25.412109 | orchestrator | Wednesday 14 May 2025 02:34:56 +0000 (0:00:00.305) 0:00:03.336 ********* 2025-05-14 02:38:25.412116 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-14 02:38:25.412123 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-14 02:38:25.412130 | 
orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-14 02:38:25.412136 | orchestrator | 2025-05-14 02:38:25.412143 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-14 02:38:25.412150 | orchestrator | 2025-05-14 02:38:25.412156 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-14 02:38:25.412162 | orchestrator | Wednesday 14 May 2025 02:34:56 +0000 (0:00:00.384) 0:00:03.720 ********* 2025-05-14 02:38:25.412168 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:38:25.412175 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 02:38:25.412181 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 02:38:25.412187 | orchestrator | 2025-05-14 02:38:25.412295 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-14 02:38:25.412307 | orchestrator | Wednesday 14 May 2025 02:34:57 +0000 (0:00:00.488) 0:00:04.208 ********* 2025-05-14 02:38:25.412336 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:25.412345 | orchestrator | 2025-05-14 02:38:25.412352 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-14 02:38:25.412359 | orchestrator | Wednesday 14 May 2025 02:34:58 +0000 (0:00:00.637) 0:00:04.846 ********* 2025-05-14 02:38:25.412400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:38:25.412412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:38:25.412427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:38:25.412439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:38:25.412453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:38:25.412464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:38:25.412475 | orchestrator | 2025-05-14 02:38:25.412485 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-14 02:38:25.412495 | orchestrator | Wednesday 14 May 2025 02:35:02 +0000 (0:00:04.229) 0:00:09.075 ********* 2025-05-14 02:38:25.412511 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.412522 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.412533 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.412543 | orchestrator | 2025-05-14 02:38:25.412553 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-14 02:38:25.412562 | orchestrator | Wednesday 14 May 2025 02:35:03 +0000 (0:00:00.795) 0:00:09.871 ********* 2025-05-14 02:38:25.412572 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.412583 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.412667 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.412674 | orchestrator | 2025-05-14 02:38:25.412680 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-14 02:38:25.412686 | orchestrator | Wednesday 14 May 2025 02:35:04 +0000 (0:00:01.570) 0:00:11.441 ********* 2025-05-14 02:38:25.412706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:38:25.412715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:38:25.412732 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:38:25.412744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:38:25.412751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:38:25.412758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:38:25.412769 | orchestrator | 2025-05-14 02:38:25.412775 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-14 02:38:25.412781 | orchestrator | Wednesday 14 May 2025 02:35:11 +0000 (0:00:06.494) 0:00:17.935 ********* 2025-05-14 02:38:25.412787 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.412793 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.412799 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.412805 | orchestrator | 2025-05-14 02:38:25.412812 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-14 02:38:25.412818 | orchestrator | Wednesday 14 May 2025 02:35:12 +0000 (0:00:01.089) 0:00:19.025 ********* 2025-05-14 02:38:25.412824 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.412830 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:25.412836 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:25.412842 | orchestrator | 2025-05-14 02:38:25.412848 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-14 02:38:25.412855 | orchestrator | Wednesday 14 May 2025 02:35:19 +0000 (0:00:06.804) 0:00:25.830 ********* 2025-05-14 02:38:25.412870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:38:25.412878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:38:25.412898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-14 02:38:25.412910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:38:25.412917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:38:25.412933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-14 02:38:25.412944 | orchestrator | 2025-05-14 02:38:25.412954 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-14 02:38:25.412964 | orchestrator | Wednesday 14 May 2025 02:35:23 +0000 (0:00:04.609) 0:00:30.439 ********* 2025-05-14 02:38:25.412974 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.412983 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:25.412992 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:25.413002 | orchestrator | 2025-05-14 02:38:25.413013 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-14 02:38:25.413023 | orchestrator | Wednesday 14 May 2025 02:35:24 +0000 (0:00:01.179) 0:00:31.619 ********* 2025-05-14 02:38:25.413034 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:25.413045 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:25.413055 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:25.413066 | orchestrator | 2025-05-14 02:38:25.413072 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-14 02:38:25.413079 | orchestrator | Wednesday 14 May 2025 02:35:25 +0000 (0:00:00.494) 0:00:32.114 ********* 2025-05-14 02:38:25.413085 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:25.413091 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:25.413097 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:25.413103 | orchestrator | 2025-05-14 
02:38:25.413109 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-14 02:38:25.413115 | orchestrator | Wednesday 14 May 2025 02:35:25 +0000 (0:00:00.448) 0:00:32.563 ********* 2025-05-14 02:38:25.413122 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-14 02:38:25.413128 | orchestrator | ...ignoring 2025-05-14 02:38:25.413134 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-14 02:38:25.413141 | orchestrator | ...ignoring 2025-05-14 02:38:25.413151 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-14 02:38:25.413157 | orchestrator | ...ignoring 2025-05-14 02:38:25.413163 | orchestrator | 2025-05-14 02:38:25.413169 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-14 02:38:25.413176 | orchestrator | Wednesday 14 May 2025 02:35:37 +0000 (0:00:11.284) 0:00:43.848 ********* 2025-05-14 02:38:25.413182 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:25.413188 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:25.413194 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:25.413200 | orchestrator | 2025-05-14 02:38:25.413206 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-14 02:38:25.413215 | orchestrator | Wednesday 14 May 2025 02:35:37 +0000 (0:00:00.511) 0:00:44.360 ********* 2025-05-14 02:38:25.413226 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:25.413236 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.413246 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.413256 | orchestrator | 2025-05-14 02:38:25.413265 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-14 02:38:25.413283 | orchestrator | Wednesday 14 May 2025 02:35:38 +0000 (0:00:00.421) 0:00:44.782 ********* 2025-05-14 02:38:25.413294 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:25.413304 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.413315 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.413322 | orchestrator | 2025-05-14 02:38:25.413333 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-14 02:38:25.413340 | orchestrator | Wednesday 14 May 2025 02:35:38 +0000 (0:00:00.377) 0:00:45.159 ********* 2025-05-14 02:38:25.413346 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:25.413352 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.413358 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.413364 | orchestrator | 2025-05-14 02:38:25.413370 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-14 02:38:25.413377 | orchestrator | Wednesday 14 May 2025 02:35:38 +0000 (0:00:00.489) 0:00:45.648 ********* 2025-05-14 02:38:25.413383 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:25.413389 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:25.413395 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:25.413401 | orchestrator | 2025-05-14 02:38:25.413408 | orchestrator | TASK [mariadb : Fail when 
MariaDB services are not synced across the whole cluster] *** 2025-05-14 02:38:25.413414 | orchestrator | Wednesday 14 May 2025 02:35:39 +0000 (0:00:00.605) 0:00:46.254 ********* 2025-05-14 02:38:25.413420 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:25.413426 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.413432 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.413438 | orchestrator | 2025-05-14 02:38:25.413444 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-14 02:38:25.413454 | orchestrator | Wednesday 14 May 2025 02:35:40 +0000 (0:00:00.510) 0:00:46.764 ********* 2025-05-14 02:38:25.413464 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.413475 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.413484 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-14 02:38:25.413494 | orchestrator | 2025-05-14 02:38:25.413505 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-14 02:38:25.413516 | orchestrator | Wednesday 14 May 2025 02:35:40 +0000 (0:00:00.499) 0:00:47.264 ********* 2025-05-14 02:38:25.413526 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.413536 | orchestrator | 2025-05-14 02:38:25.413546 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-14 02:38:25.413554 | orchestrator | Wednesday 14 May 2025 02:35:51 +0000 (0:00:11.173) 0:00:58.437 ********* 2025-05-14 02:38:25.413560 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:25.413566 | orchestrator | 2025-05-14 02:38:25.413573 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-14 02:38:25.413579 | orchestrator | Wednesday 14 May 2025 02:35:51 +0000 (0:00:00.126) 0:00:58.564 ********* 2025-05-14 02:38:25.413604 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:25.413612 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.413618 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.413624 | orchestrator | 2025-05-14 02:38:25.413630 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-14 02:38:25.413636 | orchestrator | Wednesday 14 May 2025 02:35:52 +0000 (0:00:01.038) 0:00:59.602 ********* 2025-05-14 02:38:25.413642 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.413649 | orchestrator | 2025-05-14 02:38:25.413655 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-14 02:38:25.413661 | orchestrator | Wednesday 14 May 2025 02:36:03 +0000 (0:00:10.975) 0:01:10.578 ********* 2025-05-14 02:38:25.413667 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
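The port-liveness waits above (and the earlier ignored failures "Timeout when waiting for search string MariaDB in 192.168.16.x:3306") amount to probing TCP port 3306 until the server banner read from the socket contains the string MariaDB. A minimal sketch of such a check with Ansible's wait_for module, assuming an illustrative host address, variable name and retry budget rather than the exact kolla-ansible task definition:

- name: Wait for MariaDB service port liveness (sketch)
  # Probe the database port and only continue once the protocol
  # banner read from the socket contains the string "MariaDB".
  ansible.builtin.wait_for:
    host: 192.168.16.10          # illustrative internal node address from this run
    port: 3306
    search_regex: MariaDB
    timeout: 10                  # matches the ~10 s "elapsed" in the ignored failures above
  register: mariadb_port_check   # hypothetical variable name
  retries: 10                    # mirrors the "(10 retries left)" retry message above
  delay: 5
  until: mariadb_port_check is succeeded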
2025-05-14 02:38:25.413674 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:25.413686 | orchestrator | 2025-05-14 02:38:25.413692 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-14 02:38:25.413698 | orchestrator | Wednesday 14 May 2025 02:36:11 +0000 (0:00:07.157) 0:01:17.735 ********* 2025-05-14 02:38:25.413705 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:25.413711 | orchestrator | 2025-05-14 02:38:25.413717 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-14 02:38:25.413723 | orchestrator | Wednesday 14 May 2025 02:36:13 +0000 (0:00:02.800) 0:01:20.536 ********* 2025-05-14 02:38:25.413729 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.413735 | orchestrator | 2025-05-14 02:38:25.413742 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-14 02:38:25.413748 | orchestrator | Wednesday 14 May 2025 02:36:13 +0000 (0:00:00.101) 0:01:20.637 ********* 2025-05-14 02:38:25.413754 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:25.413760 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.413766 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.413772 | orchestrator | 2025-05-14 02:38:25.413778 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-14 02:38:25.413784 | orchestrator | Wednesday 14 May 2025 02:36:14 +0000 (0:00:00.483) 0:01:21.121 ********* 2025-05-14 02:38:25.413790 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:25.413801 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:25.413807 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:25.413814 | orchestrator | 2025-05-14 02:38:25.413820 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-05-14 02:38:25.413826 | orchestrator | Wednesday 14 May 2025 02:36:14 +0000 (0:00:00.472) 0:01:21.594 ********* 2025-05-14 02:38:25.413832 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-14 02:38:25.413838 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.413844 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:25.413850 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:25.413857 | orchestrator | 2025-05-14 02:38:25.413863 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-14 02:38:25.413869 | orchestrator | skipping: no hosts matched 2025-05-14 02:38:25.413875 | orchestrator | 2025-05-14 02:38:25.413881 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-14 02:38:25.413887 | orchestrator | 2025-05-14 02:38:25.413893 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-14 02:38:25.413899 | orchestrator | Wednesday 14 May 2025 02:36:30 +0000 (0:00:15.136) 0:01:36.730 ********* 2025-05-14 02:38:25.413905 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:38:25.413914 | orchestrator | 2025-05-14 02:38:25.413930 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-14 02:38:25.413941 | orchestrator | Wednesday 14 May 2025 02:36:50 +0000 (0:00:20.695) 0:01:57.426 ********* 2025-05-14 02:38:25.413951 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:25.413961 | orchestrator | 
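The WSREP sync waits in this play poll Galera's wsrep_local_state_comment status variable until the node reports Synced, i.e. it has caught up with the rest of the cluster. A hedged sketch of such a check follows; the client invocation, the mariadb_monitor_password variable and the retry budget are illustrative and not the exact kolla-ansible implementation (the monitor user and host address are taken from this run):

- name: Wait for MariaDB service to sync WSREP (sketch)
  # Ask the local Galera node for its sync state and retry until the
  # wsrep_local_state_comment status variable reports "Synced".
  ansible.builtin.shell: |
    mysql --user=monitor --password="{{ mariadb_monitor_password }}" \
          --host=192.168.16.11 --silent \
          --execute="SHOW STATUS LIKE 'wsrep_local_state_comment'"
  register: wsrep_sync_state     # hypothetical variable name
  changed_when: false            # a status query never changes the node
  retries: 10
  delay: 6
  until: "'Synced' in wsrep_sync_state.stdout"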
2025-05-14 02:38:25.413971 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-14 02:38:25.413980 | orchestrator | Wednesday 14 May 2025 02:37:06 +0000 (0:00:15.542) 0:02:12.969 ********* 2025-05-14 02:38:25.413990 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:25.414001 | orchestrator | 2025-05-14 02:38:25.414011 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-14 02:38:25.414071 | orchestrator | 2025-05-14 02:38:25.414078 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-14 02:38:25.414085 | orchestrator | Wednesday 14 May 2025 02:37:08 +0000 (0:00:02.669) 0:02:15.639 ********* 2025-05-14 02:38:25.414091 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:38:25.414097 | orchestrator | 2025-05-14 02:38:25.414103 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-14 02:38:25.414109 | orchestrator | Wednesday 14 May 2025 02:37:26 +0000 (0:00:17.220) 0:02:32.859 ********* 2025-05-14 02:38:25.414115 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:25.414128 | orchestrator | 2025-05-14 02:38:25.414134 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-14 02:38:25.414140 | orchestrator | Wednesday 14 May 2025 02:37:46 +0000 (0:00:20.543) 0:02:53.403 ********* 2025-05-14 02:38:25.414146 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:25.414153 | orchestrator | 2025-05-14 02:38:25.414159 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-14 02:38:25.414165 | orchestrator | 2025-05-14 02:38:25.414171 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-14 02:38:25.414177 | orchestrator | Wednesday 14 May 2025 02:37:49 +0000 (0:00:02.657) 0:02:56.060 ********* 2025-05-14 02:38:25.414184 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.414190 | orchestrator | 2025-05-14 02:38:25.414196 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-14 02:38:25.414202 | orchestrator | Wednesday 14 May 2025 02:38:01 +0000 (0:00:12.404) 0:03:08.464 ********* 2025-05-14 02:38:25.414209 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:25.414220 | orchestrator | 2025-05-14 02:38:25.414230 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-14 02:38:25.414241 | orchestrator | Wednesday 14 May 2025 02:38:06 +0000 (0:00:04.513) 0:03:12.978 ********* 2025-05-14 02:38:25.414251 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:25.414261 | orchestrator | 2025-05-14 02:38:25.414270 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-14 02:38:25.414281 | orchestrator | 2025-05-14 02:38:25.414292 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-14 02:38:25.414302 | orchestrator | Wednesday 14 May 2025 02:38:09 +0000 (0:00:03.002) 0:03:15.981 ********* 2025-05-14 02:38:25.414312 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:38:25.414323 | orchestrator | 2025-05-14 02:38:25.414330 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-14 02:38:25.414336 | orchestrator | Wednesday 14 
May 2025 02:38:10 +0000 (0:00:00.787) 0:03:16.768 ********* 2025-05-14 02:38:25.414342 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.414348 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.414354 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.414360 | orchestrator | 2025-05-14 02:38:25.414366 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-14 02:38:25.414372 | orchestrator | Wednesday 14 May 2025 02:38:12 +0000 (0:00:02.578) 0:03:19.347 ********* 2025-05-14 02:38:25.414378 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.414384 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.414390 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.414396 | orchestrator | 2025-05-14 02:38:25.414402 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-14 02:38:25.414409 | orchestrator | Wednesday 14 May 2025 02:38:14 +0000 (0:00:02.185) 0:03:21.532 ********* 2025-05-14 02:38:25.414415 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.414421 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.414427 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.414433 | orchestrator | 2025-05-14 02:38:25.414439 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-14 02:38:25.414445 | orchestrator | Wednesday 14 May 2025 02:38:17 +0000 (0:00:02.441) 0:03:23.973 ********* 2025-05-14 02:38:25.414451 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.414457 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.414463 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:38:25.414469 | orchestrator | 2025-05-14 02:38:25.414475 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-14 02:38:25.414490 | orchestrator | Wednesday 14 May 2025 02:38:19 +0000 (0:00:02.215) 0:03:26.189 ********* 2025-05-14 02:38:25.414500 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:38:25.414511 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:38:25.414528 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:38:25.414538 | orchestrator | 2025-05-14 02:38:25.414548 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-14 02:38:25.414558 | orchestrator | Wednesday 14 May 2025 02:38:22 +0000 (0:00:03.024) 0:03:29.214 ********* 2025-05-14 02:38:25.414568 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:38:25.414579 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:38:25.414616 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:38:25.414626 | orchestrator | 2025-05-14 02:38:25.414632 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:38:25.414639 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-14 02:38:25.414646 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-14 02:38:25.414661 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-14 02:38:25.414668 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-14 02:38:25.414674 | orchestrator | 2025-05-14 02:38:25.414680 | orchestrator | 2025-05-14 
02:38:25.414687 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:38:25.414693 | orchestrator | Wednesday 14 May 2025 02:38:22 +0000 (0:00:00.363) 0:03:29.577 ********* 2025-05-14 02:38:25.414699 | orchestrator | =============================================================================== 2025-05-14 02:38:25.414705 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.92s 2025-05-14 02:38:25.414712 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.09s 2025-05-14 02:38:25.414718 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 15.14s 2025-05-14 02:38:25.414724 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.40s 2025-05-14 02:38:25.414730 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.28s 2025-05-14 02:38:25.414736 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.17s 2025-05-14 02:38:25.414743 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 10.98s 2025-05-14 02:38:25.414749 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.16s 2025-05-14 02:38:25.414755 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 6.80s 2025-05-14 02:38:25.414761 | orchestrator | mariadb : Copying over config.json files for services ------------------- 6.49s 2025-05-14 02:38:25.414767 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.33s 2025-05-14 02:38:25.414773 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.61s 2025-05-14 02:38:25.414780 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.51s 2025-05-14 02:38:25.414786 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.23s 2025-05-14 02:38:25.414792 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.02s 2025-05-14 02:38:25.414798 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.00s 2025-05-14 02:38:25.414804 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.80s 2025-05-14 02:38:25.414810 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.58s 2025-05-14 02:38:25.414816 | orchestrator | Check MariaDB service --------------------------------------------------- 2.55s 2025-05-14 02:38:25.414823 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.44s 2025-05-14 02:38:25.414829 | orchestrator | 2025-05-14 02:38:25 | INFO  | Task a6bbd42a-a9b3-4d8e-8764-6dd02f84ecef is in state SUCCESS 2025-05-14 02:38:25.414842 | orchestrator | 2025-05-14 02:38:25 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in state STARTED 2025-05-14 02:38:25.414848 | orchestrator | 2025-05-14 02:38:25 | INFO  | Task 58bf8750-3b20-48bf-8efc-c974ee030ec3 is in state STARTED 2025-05-14 02:38:25.414855 | orchestrator | 2025-05-14 02:38:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:38:28.456343 | orchestrator | 2025-05-14 02:38:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:38:28.456459 | orchestrator | 2025-05-14 02:38:28 
| INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:38:28.456478 | orchestrator | 2025-05-14 02:38:28 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in state STARTED 2025-05-14 02:38:28.456493 | orchestrator | 2025-05-14 02:38:28 | INFO  | Task 58bf8750-3b20-48bf-8efc-c974ee030ec3 is in state STARTED 2025-05-14 02:38:28.456507 | orchestrator | 2025-05-14 02:38:28 | INFO  | Wait 1 second(s) until the next check
[identical status checks repeated every ~3 seconds from 02:38:31 to 02:40:03: tasks d82f8ed9-5664-4bc4-a3e9-26e1a4e29521, aad3cf3b-b96c-407c-aeed-f816e9f6fce1, 753cab8f-c082-417f-8245-f75869be1e7a and 58bf8750-3b20-48bf-8efc-c974ee030ec3 remained in state STARTED, each pass followed by "Wait 1 second(s) until the next check"]
2025-05-14 02:40:06.108619 | orchestrator | 2025-05-14 02:40:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:06.110956 | orchestrator | 2025-05-14 02:40:06 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:06.112343 | orchestrator | 2025-05-14 02:40:06 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in state STARTED 2025-05-14 02:40:06.114270 | orchestrator | 2025-05-14 02:40:06 | INFO  | Task
58bf8750-3b20-48bf-8efc-c974ee030ec3 is in state STARTED 2025-05-14 02:40:06.114541 | orchestrator | 2025-05-14 02:40:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:09.155447 | orchestrator | 2025-05-14 02:40:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:09.157422 | orchestrator | 2025-05-14 02:40:09 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:09.159091 | orchestrator | 2025-05-14 02:40:09 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in state STARTED 2025-05-14 02:40:09.159134 | orchestrator | 2025-05-14 02:40:09 | INFO  | Task 58bf8750-3b20-48bf-8efc-c974ee030ec3 is in state STARTED 2025-05-14 02:40:09.159147 | orchestrator | 2025-05-14 02:40:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:12.210541 | orchestrator | 2025-05-14 02:40:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:12.213359 | orchestrator | 2025-05-14 02:40:12 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:12.214752 | orchestrator | 2025-05-14 02:40:12 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in state STARTED 2025-05-14 02:40:12.219313 | orchestrator | 2025-05-14 02:40:12.219370 | orchestrator | 2025-05-14 02:40:12.219383 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:40:12.219395 | orchestrator | 2025-05-14 02:40:12.219406 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:40:12.219417 | orchestrator | Wednesday 14 May 2025 02:38:26 +0000 (0:00:00.306) 0:00:00.306 ********* 2025-05-14 02:40:12.219429 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.219456 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:40:12.219478 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.219489 | orchestrator | 2025-05-14 02:40:12.219500 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:40:12.219511 | orchestrator | Wednesday 14 May 2025 02:38:26 +0000 (0:00:00.389) 0:00:00.696 ********* 2025-05-14 02:40:12.219522 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-14 02:40:12.219533 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-14 02:40:12.219544 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-14 02:40:12.219707 | orchestrator | 2025-05-14 02:40:12.220178 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-14 02:40:12.220199 | orchestrator | 2025-05-14 02:40:12.220218 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 02:40:12.220235 | orchestrator | Wednesday 14 May 2025 02:38:26 +0000 (0:00:00.281) 0:00:00.977 ********* 2025-05-14 02:40:12.220254 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:40:12.220272 | orchestrator | 2025-05-14 02:40:12.220290 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-14 02:40:12.220330 | orchestrator | Wednesday 14 May 2025 02:38:27 +0000 (0:00:00.746) 0:00:01.724 ********* 2025-05-14 02:40:12.220359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:40:12.220438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:40:12.220455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:40:12.220475 | orchestrator | 2025-05-14 02:40:12.220488 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-14 02:40:12.220499 | orchestrator | Wednesday 14 May 2025 02:38:29 +0000 (0:00:01.650) 0:00:03.374 ********* 2025-05-14 02:40:12.220510 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.220521 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:40:12.220532 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.220543 | orchestrator | 2025-05-14 02:40:12.220584 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2025-05-14 02:40:12.220596 | orchestrator | Wednesday 14 May 2025 02:38:29 +0000 (0:00:00.313) 0:00:03.688 ********* 2025-05-14 02:40:12.220616 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-14 02:40:12.220628 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-14 02:40:12.220639 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-14 02:40:12.220650 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-14 02:40:12.220660 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-14 02:40:12.220671 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-14 02:40:12.220681 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-14 02:40:12.220692 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-14 02:40:12.220702 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-14 02:40:12.220713 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-14 02:40:12.220723 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-14 02:40:12.220734 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-14 02:40:12.220745 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-14 02:40:12.220760 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-14 02:40:12.220771 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-14 02:40:12.220782 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-14 02:40:12.220792 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-14 02:40:12.220803 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-14 02:40:12.220813 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-14 02:40:12.220832 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-14 02:40:12.220843 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-14 02:40:12.220855 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-14 02:40:12.220868 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-14 02:40:12.220879 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-14 02:40:12.220891 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-14 02:40:12.220901 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-14 02:40:12.220913 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-14 02:40:12.220924 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-14 02:40:12.220935 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-14 02:40:12.220945 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-14 02:40:12.220956 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-14 02:40:12.220967 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-14 02:40:12.220978 | orchestrator | 2025-05-14 02:40:12.220989 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:40:12.221000 | orchestrator | Wednesday 14 May 2025 02:38:30 +0000 (0:00:01.071) 0:00:04.759 ********* 2025-05-14 02:40:12.221010 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.221021 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:40:12.221032 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.221043 | orchestrator | 2025-05-14 02:40:12.221053 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:40:12.221064 | orchestrator | Wednesday 14 May 2025 02:38:31 +0000 (0:00:00.433) 0:00:05.193 ********* 2025-05-14 02:40:12.221075 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.221087 | orchestrator | 2025-05-14 02:40:12.221104 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:40:12.221115 | orchestrator | Wednesday 14 May 2025 02:38:31 +0000 (0:00:00.118) 0:00:05.312 ********* 2025-05-14 02:40:12.221126 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.221137 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.221147 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.221158 | orchestrator | 2025-05-14 02:40:12.221169 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:40:12.221180 | orchestrator | Wednesday 14 May 2025 02:38:31 +0000 (0:00:00.412) 0:00:05.724 ********* 2025-05-14 02:40:12.221191 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.221201 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:40:12.221212 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.221223 | orchestrator | 2025-05-14 02:40:12.221234 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:40:12.221251 | orchestrator | Wednesday 14 May 2025 02:38:31 +0000 (0:00:00.302) 0:00:06.027 ********* 2025-05-14 02:40:12.221262 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.221273 | orchestrator | 2025-05-14 
02:40:12.221283 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:40:12.221294 | orchestrator | Wednesday 14 May 2025 02:38:32 +0000 (0:00:00.104) 0:00:06.131 ********* 2025-05-14 02:40:12.221305 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.221316 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.221326 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.221337 | orchestrator | 2025-05-14 02:40:12.221347 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:40:12.221358 | orchestrator | Wednesday 14 May 2025 02:38:32 +0000 (0:00:00.423) 0:00:06.555 ********* 2025-05-14 02:40:12.221369 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.221380 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:40:12.221391 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.221402 | orchestrator | 2025-05-14 02:40:12.221412 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:40:12.221423 | orchestrator | Wednesday 14 May 2025 02:38:32 +0000 (0:00:00.421) 0:00:06.976 ********* 2025-05-14 02:40:12.221434 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.221445 | orchestrator | 2025-05-14 02:40:12.221455 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:40:12.221466 | orchestrator | Wednesday 14 May 2025 02:38:33 +0000 (0:00:00.140) 0:00:07.117 ********* 2025-05-14 02:40:12.221477 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.221487 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.221498 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.221509 | orchestrator | 2025-05-14 02:40:12.221520 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:40:12.221531 | orchestrator | Wednesday 14 May 2025 02:38:33 +0000 (0:00:00.439) 0:00:07.557 ********* 2025-05-14 02:40:12.221541 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.221581 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:40:12.221595 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.221606 | orchestrator | 2025-05-14 02:40:12.221616 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:40:12.221627 | orchestrator | Wednesday 14 May 2025 02:38:33 +0000 (0:00:00.455) 0:00:08.012 ********* 2025-05-14 02:40:12.221637 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.221648 | orchestrator | 2025-05-14 02:40:12.221659 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:40:12.221669 | orchestrator | Wednesday 14 May 2025 02:38:34 +0000 (0:00:00.121) 0:00:08.134 ********* 2025-05-14 02:40:12.221680 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.221691 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.221701 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.221712 | orchestrator | 2025-05-14 02:40:12.221722 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:40:12.221733 | orchestrator | Wednesday 14 May 2025 02:38:34 +0000 (0:00:00.321) 0:00:08.455 ********* 2025-05-14 02:40:12.221743 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.221754 | orchestrator | ok: [testbed-node-1] 2025-05-14 
02:40:12.221765 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.221775 | orchestrator | 2025-05-14 02:40:12.221786 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:40:12.221796 | orchestrator | Wednesday 14 May 2025 02:38:34 +0000 (0:00:00.260) 0:00:08.716 ********* 2025-05-14 02:40:12.221807 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.221818 | orchestrator | 2025-05-14 02:40:12.221828 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:40:12.221839 | orchestrator | Wednesday 14 May 2025 02:38:34 +0000 (0:00:00.177) 0:00:08.893 ********* 2025-05-14 02:40:12.221849 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.221867 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.221877 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.221888 | orchestrator | 2025-05-14 02:40:12.221898 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:40:12.221909 | orchestrator | Wednesday 14 May 2025 02:38:35 +0000 (0:00:00.258) 0:00:09.152 ********* 2025-05-14 02:40:12.221920 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.221930 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:40:12.221941 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.221951 | orchestrator | 2025-05-14 02:40:12.221962 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:40:12.221973 | orchestrator | Wednesday 14 May 2025 02:38:35 +0000 (0:00:00.398) 0:00:09.550 ********* 2025-05-14 02:40:12.221983 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.221994 | orchestrator | 2025-05-14 02:40:12.222004 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:40:12.222015 | orchestrator | Wednesday 14 May 2025 02:38:35 +0000 (0:00:00.125) 0:00:09.676 ********* 2025-05-14 02:40:12.222084 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.222095 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.222105 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.222116 | orchestrator | 2025-05-14 02:40:12.222126 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:40:12.222137 | orchestrator | Wednesday 14 May 2025 02:38:36 +0000 (0:00:00.532) 0:00:10.208 ********* 2025-05-14 02:40:12.222155 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.222167 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:40:12.222177 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.222188 | orchestrator | 2025-05-14 02:40:12.222199 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:40:12.222210 | orchestrator | Wednesday 14 May 2025 02:38:36 +0000 (0:00:00.508) 0:00:10.717 ********* 2025-05-14 02:40:12.222220 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.222231 | orchestrator | 2025-05-14 02:40:12.222242 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:40:12.222252 | orchestrator | Wednesday 14 May 2025 02:38:36 +0000 (0:00:00.133) 0:00:10.850 ********* 2025-05-14 02:40:12.222263 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.222274 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.222284 | orchestrator 
| skipping: [testbed-node-2] 2025-05-14 02:40:12.222295 | orchestrator | 2025-05-14 02:40:12.222305 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:40:12.222405 | orchestrator | Wednesday 14 May 2025 02:38:37 +0000 (0:00:00.335) 0:00:11.186 ********* 2025-05-14 02:40:12.222427 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.222438 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:40:12.222448 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.222459 | orchestrator | 2025-05-14 02:40:12.222470 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:40:12.222480 | orchestrator | Wednesday 14 May 2025 02:38:37 +0000 (0:00:00.497) 0:00:11.683 ********* 2025-05-14 02:40:12.222491 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.222502 | orchestrator | 2025-05-14 02:40:12.222512 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:40:12.222528 | orchestrator | Wednesday 14 May 2025 02:38:37 +0000 (0:00:00.287) 0:00:11.971 ********* 2025-05-14 02:40:12.222538 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.222589 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.222603 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.222614 | orchestrator | 2025-05-14 02:40:12.222625 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:40:12.222636 | orchestrator | Wednesday 14 May 2025 02:38:38 +0000 (0:00:00.230) 0:00:12.202 ********* 2025-05-14 02:40:12.222646 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.222668 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:40:12.222678 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.222689 | orchestrator | 2025-05-14 02:40:12.222700 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:40:12.222710 | orchestrator | Wednesday 14 May 2025 02:38:38 +0000 (0:00:00.382) 0:00:12.584 ********* 2025-05-14 02:40:12.222721 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.222731 | orchestrator | 2025-05-14 02:40:12.222742 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:40:12.222752 | orchestrator | Wednesday 14 May 2025 02:38:38 +0000 (0:00:00.114) 0:00:12.699 ********* 2025-05-14 02:40:12.222763 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.222774 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.222784 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.222802 | orchestrator | 2025-05-14 02:40:12.222820 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:40:12.222839 | orchestrator | Wednesday 14 May 2025 02:38:39 +0000 (0:00:00.433) 0:00:13.132 ********* 2025-05-14 02:40:12.222859 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.222878 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:40:12.222892 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.222903 | orchestrator | 2025-05-14 02:40:12.222913 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:40:12.222924 | orchestrator | Wednesday 14 May 2025 02:38:39 +0000 (0:00:00.430) 0:00:13.563 ********* 2025-05-14 02:40:12.222935 | orchestrator | skipping: [testbed-node-0] 2025-05-14 
02:40:12.222945 | orchestrator | 2025-05-14 02:40:12.222956 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:40:12.222967 | orchestrator | Wednesday 14 May 2025 02:38:39 +0000 (0:00:00.152) 0:00:13.716 ********* 2025-05-14 02:40:12.222977 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.222988 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.222998 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.223009 | orchestrator | 2025-05-14 02:40:12.223020 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-14 02:40:12.223031 | orchestrator | Wednesday 14 May 2025 02:38:40 +0000 (0:00:00.441) 0:00:14.157 ********* 2025-05-14 02:40:12.223042 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:40:12.223052 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:40:12.223063 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:40:12.223073 | orchestrator | 2025-05-14 02:40:12.223084 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-14 02:40:12.223095 | orchestrator | Wednesday 14 May 2025 02:38:40 +0000 (0:00:00.407) 0:00:14.564 ********* 2025-05-14 02:40:12.223105 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.223116 | orchestrator | 2025-05-14 02:40:12.223126 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-14 02:40:12.223137 | orchestrator | Wednesday 14 May 2025 02:38:40 +0000 (0:00:00.121) 0:00:14.686 ********* 2025-05-14 02:40:12.223148 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.223158 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.223169 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.223194 | orchestrator | 2025-05-14 02:40:12.223205 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-14 02:40:12.223216 | orchestrator | Wednesday 14 May 2025 02:38:41 +0000 (0:00:00.411) 0:00:15.097 ********* 2025-05-14 02:40:12.223227 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:40:12.223237 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:40:12.223248 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:40:12.223258 | orchestrator | 2025-05-14 02:40:12.223269 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-14 02:40:12.223279 | orchestrator | Wednesday 14 May 2025 02:38:44 +0000 (0:00:03.217) 0:00:18.314 ********* 2025-05-14 02:40:12.223290 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-14 02:40:12.223319 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-14 02:40:12.223330 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-14 02:40:12.223341 | orchestrator | 2025-05-14 02:40:12.223352 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-14 02:40:12.223362 | orchestrator | Wednesday 14 May 2025 02:38:47 +0000 (0:00:03.389) 0:00:21.704 ********* 2025-05-14 02:40:12.223373 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-14 02:40:12.223384 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-14 02:40:12.223395 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-14 02:40:12.223406 | orchestrator | 2025-05-14 02:40:12.223416 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-14 02:40:12.223427 | orchestrator | Wednesday 14 May 2025 02:38:50 +0000 (0:00:03.080) 0:00:24.784 ********* 2025-05-14 02:40:12.223438 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-14 02:40:12.223448 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-14 02:40:12.223459 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-14 02:40:12.223469 | orchestrator | 2025-05-14 02:40:12.223485 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-14 02:40:12.223496 | orchestrator | Wednesday 14 May 2025 02:38:53 +0000 (0:00:02.609) 0:00:27.394 ********* 2025-05-14 02:40:12.223507 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.223517 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.223528 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.223538 | orchestrator | 2025-05-14 02:40:12.223574 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-14 02:40:12.223586 | orchestrator | Wednesday 14 May 2025 02:38:53 +0000 (0:00:00.465) 0:00:27.859 ********* 2025-05-14 02:40:12.223596 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.223607 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.223618 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.223628 | orchestrator | 2025-05-14 02:40:12.223639 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 02:40:12.223650 | orchestrator | Wednesday 14 May 2025 02:38:54 +0000 (0:00:00.361) 0:00:28.221 ********* 2025-05-14 02:40:12.223660 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:40:12.223671 | orchestrator | 2025-05-14 02:40:12.223682 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-14 02:40:12.223692 | orchestrator | Wednesday 14 May 2025 02:38:54 +0000 (0:00:00.543) 0:00:28.765 ********* 2025-05-14 02:40:12.223723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:40:12.223765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:40:12.223797 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:40:12.223833 | orchestrator | 2025-05-14 02:40:12.223853 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-14 02:40:12.223871 | orchestrator | Wednesday 14 May 2025 02:38:56 +0000 (0:00:01.624) 0:00:30.390 ********* 2025-05-14 02:40:12.223894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:40:12.223907 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.223936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:40:12.223949 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.223966 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:40:12.223984 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.223995 | orchestrator | 2025-05-14 02:40:12.224006 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-14 02:40:12.224017 | orchestrator | Wednesday 14 May 2025 02:38:57 +0000 (0:00:00.790) 0:00:31.181 ********* 2025-05-14 02:40:12.224043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:40:12.224056 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.224067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:40:12.224086 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.224117 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-14 02:40:12.224130 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.224141 | orchestrator | 2025-05-14 02:40:12.224152 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-14 02:40:12.224163 | orchestrator | Wednesday 14 May 2025 02:38:58 +0000 (0:00:01.001) 0:00:32.182 ********* 2025-05-14 02:40:12.224180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 
'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:40:12.224207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:40:12.224228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-14 02:40:12.224247 | orchestrator | 2025-05-14 02:40:12.224258 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 02:40:12.224269 | orchestrator | Wednesday 14 May 2025 02:39:03 +0000 (0:00:05.161) 0:00:37.343 ********* 2025-05-14 02:40:12.224280 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:40:12.224291 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:40:12.224301 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:40:12.224312 | orchestrator | 2025-05-14 02:40:12.224323 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-14 02:40:12.224334 | orchestrator | Wednesday 14 May 2025 02:39:03 +0000 (0:00:00.442) 0:00:37.785 ********* 2025-05-14 02:40:12.224344 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:40:12.224355 | orchestrator | 2025-05-14 02:40:12.224366 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-14 02:40:12.224376 | orchestrator | Wednesday 14 May 2025 02:39:04 +0000 (0:00:00.662) 0:00:38.448 ********* 2025-05-14 02:40:12.224387 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:40:12.224398 | orchestrator | 2025-05-14 02:40:12.224408 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-14 02:40:12.224424 | 
orchestrator | Wednesday 14 May 2025 02:39:06 +0000 (0:00:02.537) 0:00:40.986 *********
2025-05-14 02:40:12.224435 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:40:12.224445 | orchestrator |
2025-05-14 02:40:12.224456 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-05-14 02:40:12.224467 | orchestrator | Wednesday 14 May 2025 02:39:09 +0000 (0:00:02.276) 0:00:43.262 *********
2025-05-14 02:40:12.224477 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:40:12.224488 | orchestrator |
2025-05-14 02:40:12.224498 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-14 02:40:12.224509 | orchestrator | Wednesday 14 May 2025 02:39:23 +0000 (0:00:14.311) 0:00:57.574 *********
2025-05-14 02:40:12.224527 | orchestrator |
2025-05-14 02:40:12.224538 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-14 02:40:12.224574 | orchestrator | Wednesday 14 May 2025 02:39:23 +0000 (0:00:00.055) 0:00:57.630 *********
2025-05-14 02:40:12.224592 | orchestrator |
2025-05-14 02:40:12.224603 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-14 02:40:12.224614 | orchestrator | Wednesday 14 May 2025 02:39:23 +0000 (0:00:00.189) 0:00:57.819 *********
2025-05-14 02:40:12.224624 | orchestrator |
2025-05-14 02:40:12.224635 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-05-14 02:40:12.224646 | orchestrator | Wednesday 14 May 2025 02:39:23 +0000 (0:00:00.056) 0:00:57.876 *********
2025-05-14 02:40:12.224656 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:40:12.224667 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:40:12.224678 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:40:12.224688 | orchestrator |
2025-05-14 02:40:12.224699 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 02:40:12.224710 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-14 02:40:12.224721 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-14 02:40:12.224732 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-14 02:40:12.224742 | orchestrator |
2025-05-14 02:40:12.224753 | orchestrator |
2025-05-14 02:40:12.224763 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:40:12.224774 | orchestrator | Wednesday 14 May 2025 02:40:09 +0000 (0:00:45.534) 0:01:43.411 *********
2025-05-14 02:40:12.224784 | orchestrator | ===============================================================================
2025-05-14 02:40:12.224795 | orchestrator | horizon : Restart horizon container ------------------------------------ 45.54s
2025-05-14 02:40:12.224806 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.31s
2025-05-14 02:40:12.224816 | orchestrator | horizon : Deploy horizon container -------------------------------------- 5.16s
2025-05-14 02:40:12.224827 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.39s
2025-05-14 02:40:12.224837 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.22s
2025-05-14 02:40:12.224848 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 3.08s
2025-05-14 02:40:12.224859 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.61s
2025-05-14 02:40:12.224869 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.54s
2025-05-14 02:40:12.224880 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.28s
2025-05-14 02:40:12.224890 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.65s
2025-05-14 02:40:12.224901 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.62s
2025-05-14 02:40:12.224911 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.07s
2025-05-14 02:40:12.224922 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.00s
2025-05-14 02:40:12.224939 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.79s
2025-05-14 02:40:12.224950 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s
2025-05-14 02:40:12.224960 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s
2025-05-14 02:40:12.224971 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s
2025-05-14 02:40:12.224981 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s
2025-05-14 02:40:12.224992 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s
2025-05-14 02:40:12.225009 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2025-05-14 02:40:12.225020 | orchestrator | 2025-05-14 02:40:12 | INFO  | Task 58bf8750-3b20-48bf-8efc-c974ee030ec3 is in state SUCCESS
2025-05-14 02:40:12.225031 | orchestrator | 2025-05-14 02:40:12 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:40:15.262818 | orchestrator | 2025-05-14 02:40:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:40:15.264027 | orchestrator | 2025-05-14 02:40:15 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED
2025-05-14 02:40:15.264668 | orchestrator | 2025-05-14 02:40:15 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in state STARTED
2025-05-14 02:40:15.265093 | orchestrator | 2025-05-14 02:40:15 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:40:18.305673 | orchestrator | 2025-05-14 02:40:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:40:18.305775 | orchestrator | 2025-05-14 02:40:18 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED
2025-05-14 02:40:18.306687 | orchestrator | 2025-05-14 02:40:18 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in state STARTED
2025-05-14 02:40:18.306768 | orchestrator | 2025-05-14 02:40:18 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:40:21.359818 | orchestrator | 2025-05-14 02:40:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:40:21.362230 | orchestrator | 2025-05-14 02:40:21 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED
2025-05-14 02:40:21.364104 | orchestrator | 2025-05-14 02:40:21 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in state STARTED
2025-05-14 02:40:21.364134 | orchestrator | 2025-05-14 02:40:21 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:40:24.411666 | orchestrator | 2025-05-14 02:40:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:40:24.418296 | orchestrator | 2025-05-14 02:40:24 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED
2025-05-14 02:40:24.418386 | orchestrator | 2025-05-14 02:40:24 | INFO  | Task 753cab8f-c082-417f-8245-f75869be1e7a is in state SUCCESS
2025-05-14 02:40:24.418401 | orchestrator | 2025-05-14 02:40:24 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:40:24.420096 | orchestrator |
2025-05-14 02:40:24.420158 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-14 02:40:24.420170 | orchestrator |
2025-05-14 02:40:24.420180 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-05-14 02:40:24.420188 | orchestrator |
2025-05-14 02:40:24.420196 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-05-14 02:40:24.420205 | orchestrator | Wednesday 14 May 2025 02:38:14 +0000 (0:00:01.134) 0:00:01.134 *********
2025-05-14 02:40:24.420215 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-14 02:40:24.420224 | orchestrator |
2025-05-14 02:40:24.420233 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-05-14 02:40:24.420241 | orchestrator | Wednesday 14 May 2025 02:38:15 +0000 (0:00:00.522) 0:00:01.657 *********
2025-05-14 02:40:24.420251 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0)
2025-05-14 02:40:24.420261 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1)
2025-05-14 02:40:24.420270 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2)
2025-05-14 02:40:24.420278 | orchestrator |
2025-05-14 02:40:24.420286 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-05-14 02:40:24.420317 | orchestrator | Wednesday 14 May 2025 02:38:15 +0000 (0:00:00.711) 0:00:02.369 *********
2025-05-14 02:40:24.420323 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-14 02:40:24.420329 | orchestrator |
2025-05-14 02:40:24.420334 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-05-14 02:40:24.420339 | orchestrator | Wednesday 14 May 2025 02:38:16 +0000 (0:00:00.566) 0:00:02.935 *********
2025-05-14 02:40:24.420345 | orchestrator | ok: [testbed-node-3]
2025-05-14 02:40:24.420350 | orchestrator | ok: [testbed-node-5]
2025-05-14 02:40:24.420355 | orchestrator | ok: [testbed-node-4]
2025-05-14 02:40:24.420360 | orchestrator |
2025-05-14 02:40:24.420366 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-05-14 02:40:24.420371 | orchestrator | Wednesday 14 May 2025 02:38:16 +0000 (0:00:00.583) 0:00:03.518 *********
2025-05-14 02:40:24.420376 | orchestrator | ok: [testbed-node-3]
2025-05-14 02:40:24.420381 | orchestrator | ok: [testbed-node-4]
2025-05-14 02:40:24.420386 | orchestrator | ok: [testbed-node-5]
2025-05-14 02:40:24.420391 | orchestrator |
2025-05-14 02:40:24.420396 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-05-14 02:40:24.420401 | orchestrator | Wednesday 14 May 2025 02:38:17 +0000 (0:00:00.260) 0:00:03.779 ********* 2025-05-14 02:40:24.420406 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.420411 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.420416 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:40:24.420421 | orchestrator | 2025-05-14 02:40:24.420426 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-14 02:40:24.420431 | orchestrator | Wednesday 14 May 2025 02:38:17 +0000 (0:00:00.715) 0:00:04.494 ********* 2025-05-14 02:40:24.420437 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.420442 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.420447 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:40:24.420452 | orchestrator | 2025-05-14 02:40:24.420457 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-14 02:40:24.420462 | orchestrator | Wednesday 14 May 2025 02:38:18 +0000 (0:00:00.273) 0:00:04.767 ********* 2025-05-14 02:40:24.420467 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.420472 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.420477 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:40:24.420482 | orchestrator | 2025-05-14 02:40:24.420487 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-14 02:40:24.420492 | orchestrator | Wednesday 14 May 2025 02:38:18 +0000 (0:00:00.275) 0:00:05.043 ********* 2025-05-14 02:40:24.420499 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.420507 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.420515 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:40:24.420524 | orchestrator | 2025-05-14 02:40:24.420571 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-14 02:40:24.420578 | orchestrator | Wednesday 14 May 2025 02:38:18 +0000 (0:00:00.312) 0:00:05.355 ********* 2025-05-14 02:40:24.420583 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.420589 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.420594 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.420599 | orchestrator | 2025-05-14 02:40:24.420604 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-14 02:40:24.420609 | orchestrator | Wednesday 14 May 2025 02:38:19 +0000 (0:00:00.417) 0:00:05.772 ********* 2025-05-14 02:40:24.420614 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.420619 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.420624 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:40:24.420629 | orchestrator | 2025-05-14 02:40:24.420634 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-14 02:40:24.420639 | orchestrator | Wednesday 14 May 2025 02:38:19 +0000 (0:00:00.277) 0:00:06.050 ********* 2025-05-14 02:40:24.420645 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 02:40:24.420655 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:40:24.420660 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:40:24.420665 | orchestrator | 2025-05-14 02:40:24.420670 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] 
******************************** 2025-05-14 02:40:24.420675 | orchestrator | Wednesday 14 May 2025 02:38:20 +0000 (0:00:00.725) 0:00:06.775 ********* 2025-05-14 02:40:24.420681 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.420687 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.420693 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:40:24.420699 | orchestrator | 2025-05-14 02:40:24.420705 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-14 02:40:24.420711 | orchestrator | Wednesday 14 May 2025 02:38:20 +0000 (0:00:00.438) 0:00:07.213 ********* 2025-05-14 02:40:24.420730 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 02:40:24.420736 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:40:24.420742 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:40:24.420748 | orchestrator | 2025-05-14 02:40:24.420754 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-14 02:40:24.420760 | orchestrator | Wednesday 14 May 2025 02:38:22 +0000 (0:00:02.104) 0:00:09.318 ********* 2025-05-14 02:40:24.420766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:40:24.420772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:40:24.420777 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:40:24.420783 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.420789 | orchestrator | 2025-05-14 02:40:24.420795 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-14 02:40:24.420801 | orchestrator | Wednesday 14 May 2025 02:38:23 +0000 (0:00:00.427) 0:00:09.746 ********* 2025-05-14 02:40:24.420809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-14 02:40:24.420818 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-14 02:40:24.420824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-14 02:40:24.420830 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.420836 | orchestrator | 2025-05-14 02:40:24.420842 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-14 02:40:24.420848 | orchestrator | Wednesday 14 May 2025 02:38:23 +0000 (0:00:00.658) 0:00:10.404 ********* 2025-05-14 02:40:24.420856 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 
'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:40:24.420864 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:40:24.421112 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:40:24.421120 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421125 | orchestrator | 2025-05-14 02:40:24.421130 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-14 02:40:24.421135 | orchestrator | Wednesday 14 May 2025 02:38:24 +0000 (0:00:00.173) 0:00:10.577 ********* 2025-05-14 02:40:24.421142 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '17727295c928', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-14 02:38:21.386408', 'end': '2025-05-14 02:38:21.429834', 'delta': '0:00:00.043426', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['17727295c928'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-14 02:40:24.421157 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '59d5e18c50f5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-14 02:38:21.929187', 'end': '2025-05-14 02:38:21.972145', 'delta': '0:00:00.042958', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['59d5e18c50f5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-14 02:40:24.421164 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '02d74a4546cf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-14 02:38:22.467426', 'end': '2025-05-14 02:38:22.503000', 'delta': '0:00:00.035574', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['02d74a4546cf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-14 02:40:24.421169 | orchestrator | 2025-05-14 02:40:24.421174 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-14 02:40:24.421180 | orchestrator | Wednesday 14 May 2025 02:38:24 +0000 (0:00:00.197) 0:00:10.774 ********* 2025-05-14 02:40:24.421185 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.421190 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.421195 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:40:24.421200 | orchestrator | 2025-05-14 02:40:24.421205 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-14 02:40:24.421210 | orchestrator | Wednesday 14 May 2025 02:38:24 +0000 (0:00:00.489) 0:00:11.264 ********* 2025-05-14 02:40:24.421215 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-14 02:40:24.421221 | orchestrator | 2025-05-14 02:40:24.421230 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-14 02:40:24.421238 | orchestrator | Wednesday 14 May 2025 02:38:27 +0000 (0:00:02.378) 0:00:13.643 ********* 2025-05-14 02:40:24.421251 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421260 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.421268 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.421277 | orchestrator | 2025-05-14 02:40:24.421285 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-14 02:40:24.421294 | orchestrator | Wednesday 14 May 2025 02:38:27 +0000 (0:00:00.522) 0:00:14.166 ********* 2025-05-14 02:40:24.421302 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421310 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.421320 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.421326 | orchestrator | 2025-05-14 02:40:24.421331 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 02:40:24.421336 | orchestrator | Wednesday 14 May 2025 02:38:28 +0000 (0:00:00.499) 0:00:14.666 ********* 2025-05-14 02:40:24.421341 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421346 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.421351 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.421356 | orchestrator | 2025-05-14 02:40:24.421365 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-14 02:40:24.421370 | orchestrator | Wednesday 14 May 2025 02:38:28 +0000 (0:00:00.327) 0:00:14.993 ********* 2025-05-14 02:40:24.421375 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.421380 | orchestrator | 2025-05-14 02:40:24.421385 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-14 02:40:24.421390 | orchestrator | Wednesday 14 May 2025 02:38:28 +0000 (0:00:00.113) 0:00:15.106 ********* 2025-05-14 02:40:24.421395 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421400 | orchestrator | 2025-05-14 02:40:24.421405 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 02:40:24.421410 | orchestrator | Wednesday 14 May 2025 02:38:28 +0000 (0:00:00.237) 0:00:15.344 ********* 2025-05-14 02:40:24.421415 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421421 | orchestrator | skipping: [testbed-node-4] 
2025-05-14 02:40:24.421426 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.421430 | orchestrator | 2025-05-14 02:40:24.421435 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-14 02:40:24.421440 | orchestrator | Wednesday 14 May 2025 02:38:29 +0000 (0:00:00.541) 0:00:15.886 ********* 2025-05-14 02:40:24.421445 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421450 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.421456 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.421461 | orchestrator | 2025-05-14 02:40:24.421466 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-14 02:40:24.421471 | orchestrator | Wednesday 14 May 2025 02:38:29 +0000 (0:00:00.348) 0:00:16.234 ********* 2025-05-14 02:40:24.421476 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421481 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.421486 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.421491 | orchestrator | 2025-05-14 02:40:24.421496 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-14 02:40:24.421502 | orchestrator | Wednesday 14 May 2025 02:38:29 +0000 (0:00:00.326) 0:00:16.561 ********* 2025-05-14 02:40:24.421507 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421512 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.421522 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.421527 | orchestrator | 2025-05-14 02:40:24.421532 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-14 02:40:24.421538 | orchestrator | Wednesday 14 May 2025 02:38:30 +0000 (0:00:00.343) 0:00:16.905 ********* 2025-05-14 02:40:24.421586 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421595 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.421604 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.421612 | orchestrator | 2025-05-14 02:40:24.421630 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-14 02:40:24.421636 | orchestrator | Wednesday 14 May 2025 02:38:30 +0000 (0:00:00.506) 0:00:17.412 ********* 2025-05-14 02:40:24.421641 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421646 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.421651 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.421656 | orchestrator | 2025-05-14 02:40:24.421662 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-14 02:40:24.421667 | orchestrator | Wednesday 14 May 2025 02:38:31 +0000 (0:00:00.311) 0:00:17.724 ********* 2025-05-14 02:40:24.421672 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421678 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.421683 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.421688 | orchestrator | 2025-05-14 02:40:24.421693 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-14 02:40:24.421698 | orchestrator | Wednesday 14 May 2025 02:38:31 +0000 (0:00:00.310) 0:00:18.035 ********* 2025-05-14 02:40:24.421704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--cb58592c--122c--52e3--870d--c9748cfaa53d-osd--block--cb58592c--122c--52e3--870d--c9748cfaa53d', 'dm-uuid-LVM-tHFsPa1Zsw1yoNENjzY3utZu0eTPYbS4dJAmJocSDNoO6F4fb16ndXk314plfCdR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b14ae20f--13fb--53c3--906d--34f9f68040ad-osd--block--b14ae20f--13fb--53c3--906d--34f9f68040ad', 'dm-uuid-LVM-bOkJVLp7SZmvorSx9c6SShTcOJL7GkIA1I1O9R0OzDiXPssI7LdzI7YYonqs4jBz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--22852bcc--228b--503b--9f2d--d63325c20b67-osd--block--22852bcc--228b--503b--9f2d--d63325c20b67', 'dm-uuid-LVM-2vpr9dH9gZeY8gSil9erCIBYNaxeCzrZ1IWCpJEufaeYaIMw4MYdPiXqm21TwVYW'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fc7bdc9b--bbf6--5512--af7e--0ab125570579-osd--block--fc7bdc9b--bbf6--5512--af7e--0ab125570579', 'dm-uuid-LVM-WUctxpClN6jUduZp73Iv5SahlscRQAQldk84TPtfzN2yibAyIlveTWNy7oqd97jt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314', 'scsi-SQEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4cd3396-0a08-4c9a-a600-88a027dd3314-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.421826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--cb58592c--122c--52e3--870d--c9748cfaa53d-osd--block--cb58592c--122c--52e3--870d--c9748cfaa53d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ITJGCf-nQY9-xUaX-udKR-CHjO-rooj-hoHY6i', 'scsi-0QEMU_QEMU_HARDDISK_1098e660-21c4-40f1-8a57-5405cc8713a2', 'scsi-SQEMU_QEMU_HARDDISK_1098e660-21c4-40f1-8a57-5405cc8713a2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.421837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b14ae20f--13fb--53c3--906d--34f9f68040ad-osd--block--b14ae20f--13fb--53c3--906d--34f9f68040ad'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LNOzDv-mwbK-977r-ZNn2-BUfK-Ojs2-FTfoOq', 'scsi-0QEMU_QEMU_HARDDISK_41d88fd2-4f90-4be6-b9c2-0d02d8e1d9f7', 'scsi-SQEMU_QEMU_HARDDISK_41d88fd2-4f90-4be6-b9c2-0d02d8e1d9f7'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.421858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37cfb3af-bf99-4b3f-874b-d71467a37a95', 'scsi-SQEMU_QEMU_HARDDISK_37cfb3af-bf99-4b3f-874b-d71467a37a95'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.421901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-42-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.421922 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.421928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d', 'scsi-SQEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fb55ef0b-86dc-4261-8469-da65bd85098d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.421971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4aa0a295--50da--5a6e--9e1c--976797741e16-osd--block--4aa0a295--50da--5a6e--9e1c--976797741e16', 'dm-uuid-LVM-kCdz1cde0pMwsO7F2lzKVC2J1lH2SAPakSAWtUoVyWeUARAfubkYbGmRdDUaBMdb'], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--22852bcc--228b--503b--9f2d--d63325c20b67-osd--block--22852bcc--228b--503b--9f2d--d63325c20b67'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V764Hq-8X16-LNUu-Hl8y-SGFt-xULo-iyh3tC', 'scsi-0QEMU_QEMU_HARDDISK_5f54ee85-b545-45a6-a856-bcb5a8b0ac61', 'scsi-SQEMU_QEMU_HARDDISK_5f54ee85-b545-45a6-a856-bcb5a8b0ac61'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.421985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--19540cc4--3279--5090--817a--02eeffb19a16-osd--block--19540cc4--3279--5090--817a--02eeffb19a16', 'dm-uuid-LVM-1KXQdUGVl8VfBLlgnGCchBpMSZqt1xNdXGspfLy96JIM1P11e7FnyOlDaxnhF5Xr'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.421991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fc7bdc9b--bbf6--5512--af7e--0ab125570579-osd--block--fc7bdc9b--bbf6--5512--af7e--0ab125570579'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-71E2E4-aKB8-J58C-zoT5-3Xr5-ce13-EO3DMe', 'scsi-0QEMU_QEMU_HARDDISK_7ac274fd-1a92-402b-b855-ca6b0ab20cf2', 'scsi-SQEMU_QEMU_HARDDISK_7ac274fd-1a92-402b-b855-ca6b0ab20cf2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.422004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d2bee4e-0e3b-437e-a6d5-c0ab15229884', 'scsi-SQEMU_QEMU_HARDDISK_1d2bee4e-0e3b-437e-a6d5-c0ab15229884'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.422010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.422049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-42-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.422057 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.422063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.422068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.422074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.422082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.422095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.422100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.422110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:40:24.422116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d', 'scsi-SQEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part1', 'scsi-SQEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part14', 'scsi-SQEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part15', 'scsi-SQEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part16', 'scsi-SQEMU_QEMU_HARDDISK_e2db2252-8503-4549-bea5-ecd40c91a84d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.422125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--4aa0a295--50da--5a6e--9e1c--976797741e16-osd--block--4aa0a295--50da--5a6e--9e1c--976797741e16'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vZEOLk-gz5S-Ejgr-ohmn-qOyq-pndi-D4F9VL', 'scsi-0QEMU_QEMU_HARDDISK_dfedfdfd-f02f-46ee-b152-0d1db465af93', 'scsi-SQEMU_QEMU_HARDDISK_dfedfdfd-f02f-46ee-b152-0d1db465af93'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.422139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--19540cc4--3279--5090--817a--02eeffb19a16-osd--block--19540cc4--3279--5090--817a--02eeffb19a16'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0uiX9e-BgTy-IRkz-qU3A-ZzGp-2nz6-mwR7Nh', 'scsi-0QEMU_QEMU_HARDDISK_b728a659-cffd-44e0-b567-754457aa92dd', 'scsi-SQEMU_QEMU_HARDDISK_b728a659-cffd-44e0-b567-754457aa92dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.422150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0315b34d-7399-4bf5-aad0-c6c82dbe1c9e', 'scsi-SQEMU_QEMU_HARDDISK_0315b34d-7399-4bf5-aad0-c6c82dbe1c9e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.422156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-42-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:40:24.422161 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.422166 | orchestrator | 2025-05-14 02:40:24.422172 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-14 02:40:24.422177 | orchestrator | Wednesday 14 May 2025 02:38:32 +0000 (0:00:00.689) 0:00:18.724 ********* 2025-05-14 02:40:24.422182 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-14 02:40:24.422187 | orchestrator | 2025-05-14 02:40:24.422192 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-14 02:40:24.422198 | orchestrator | Wednesday 14 May 2025 02:38:33 +0000 (0:00:01.462) 0:00:20.186 ********* 2025-05-14 02:40:24.422203 | orchestrator | 
ok: [testbed-node-3] 2025-05-14 02:40:24.422208 | orchestrator | 2025-05-14 02:40:24.422214 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-14 02:40:24.422219 | orchestrator | Wednesday 14 May 2025 02:38:33 +0000 (0:00:00.144) 0:00:20.331 ********* 2025-05-14 02:40:24.422224 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.422229 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.422235 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:40:24.422240 | orchestrator | 2025-05-14 02:40:24.422245 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-14 02:40:24.422250 | orchestrator | Wednesday 14 May 2025 02:38:34 +0000 (0:00:00.356) 0:00:20.687 ********* 2025-05-14 02:40:24.422256 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.422261 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.422266 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:40:24.422271 | orchestrator | 2025-05-14 02:40:24.422276 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-14 02:40:24.422282 | orchestrator | Wednesday 14 May 2025 02:38:34 +0000 (0:00:00.633) 0:00:21.321 ********* 2025-05-14 02:40:24.422291 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.422297 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.422302 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:40:24.422307 | orchestrator | 2025-05-14 02:40:24.422312 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 02:40:24.422318 | orchestrator | Wednesday 14 May 2025 02:38:35 +0000 (0:00:00.255) 0:00:21.576 ********* 2025-05-14 02:40:24.422323 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.422328 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.422333 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:40:24.422338 | orchestrator | 2025-05-14 02:40:24.422343 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 02:40:24.422348 | orchestrator | Wednesday 14 May 2025 02:38:35 +0000 (0:00:00.768) 0:00:22.344 ********* 2025-05-14 02:40:24.422353 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.422359 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.422364 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.422369 | orchestrator | 2025-05-14 02:40:24.422377 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 02:40:24.422382 | orchestrator | Wednesday 14 May 2025 02:38:36 +0000 (0:00:00.289) 0:00:22.634 ********* 2025-05-14 02:40:24.422387 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.422392 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.422397 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.422403 | orchestrator | 2025-05-14 02:40:24.422408 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 02:40:24.422413 | orchestrator | Wednesday 14 May 2025 02:38:36 +0000 (0:00:00.433) 0:00:23.067 ********* 2025-05-14 02:40:24.422418 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.422423 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.422428 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.422433 | orchestrator | 2025-05-14 02:40:24.422439 | orchestrator | TASK [ceph-facts : 
set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-14 02:40:24.422444 | orchestrator | Wednesday 14 May 2025 02:38:36 +0000 (0:00:00.292) 0:00:23.360 ********* 2025-05-14 02:40:24.422451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:40:24.422460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:40:24.422469 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:40:24.422478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:40:24.422487 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.422496 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:40:24.422505 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:40:24.422515 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:40:24.422520 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:40:24.422525 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.422530 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:40:24.422536 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.422564 | orchestrator | 2025-05-14 02:40:24.422570 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-14 02:40:24.422580 | orchestrator | Wednesday 14 May 2025 02:38:37 +0000 (0:00:01.010) 0:00:24.370 ********* 2025-05-14 02:40:24.422586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:40:24.422591 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:40:24.422596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:40:24.422601 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:40:24.422606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:40:24.422611 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.422621 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:40:24.422626 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:40:24.422631 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.422636 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:40:24.422642 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:40:24.422647 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.422652 | orchestrator | 2025-05-14 02:40:24.422657 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-14 02:40:24.422662 | orchestrator | Wednesday 14 May 2025 02:38:38 +0000 (0:00:00.692) 0:00:25.063 ********* 2025-05-14 02:40:24.422667 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-14 02:40:24.422672 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-14 02:40:24.422678 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-14 02:40:24.422683 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-14 02:40:24.422687 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-14 02:40:24.422692 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-14 02:40:24.422697 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 
2025-05-14 02:40:24.422702 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-14 02:40:24.422707 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-14 02:40:24.422712 | orchestrator | 2025-05-14 02:40:24.422717 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-14 02:40:24.422722 | orchestrator | Wednesday 14 May 2025 02:38:39 +0000 (0:00:01.452) 0:00:26.515 ********* 2025-05-14 02:40:24.422727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:40:24.422732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:40:24.422738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:40:24.422743 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:40:24.422748 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:40:24.422753 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:40:24.422758 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.422763 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.422768 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:40:24.422773 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:40:24.422778 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:40:24.422783 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.422788 | orchestrator | 2025-05-14 02:40:24.422793 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-14 02:40:24.422798 | orchestrator | Wednesday 14 May 2025 02:38:40 +0000 (0:00:00.566) 0:00:27.081 ********* 2025-05-14 02:40:24.422803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-14 02:40:24.422809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-14 02:40:24.422814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-14 02:40:24.422822 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-14 02:40:24.422827 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-14 02:40:24.422832 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.422837 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-14 02:40:24.422844 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.422853 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-14 02:40:24.422861 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-14 02:40:24.422868 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-14 02:40:24.422876 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.422888 | orchestrator | 2025-05-14 02:40:24.422896 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-14 02:40:24.422905 | orchestrator | Wednesday 14 May 2025 02:38:40 +0000 (0:00:00.424) 0:00:27.506 ********* 2025-05-14 02:40:24.422914 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:40:24.422922 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:40:24.422930 | orchestrator | skipping: [testbed-node-3] => 
(item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:40:24.422938 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.422946 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:40:24.422955 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:40:24.422965 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:40:24.422974 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.422984 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-14 02:40:24.423000 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:40:24.423007 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:40:24.423012 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.423017 | orchestrator | 2025-05-14 02:40:24.423023 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-14 02:40:24.423039 | orchestrator | Wednesday 14 May 2025 02:38:41 +0000 (0:00:00.477) 0:00:27.984 ********* 2025-05-14 02:40:24.423044 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:40:24.423050 | orchestrator | 2025-05-14 02:40:24.423055 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-14 02:40:24.423060 | orchestrator | Wednesday 14 May 2025 02:38:42 +0000 (0:00:00.750) 0:00:28.735 ********* 2025-05-14 02:40:24.423065 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423070 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.423075 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.423080 | orchestrator | 2025-05-14 02:40:24.423086 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-14 02:40:24.423091 | orchestrator | Wednesday 14 May 2025 02:38:42 +0000 (0:00:00.322) 0:00:29.057 ********* 2025-05-14 02:40:24.423096 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423101 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.423106 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.423111 | orchestrator | 2025-05-14 02:40:24.423116 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-14 02:40:24.423122 | orchestrator | Wednesday 14 May 2025 02:38:42 +0000 (0:00:00.429) 0:00:29.487 ********* 2025-05-14 02:40:24.423127 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423132 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.423137 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.423142 | orchestrator | 2025-05-14 02:40:24.423148 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-14 02:40:24.423153 | orchestrator | Wednesday 14 May 2025 02:38:43 +0000 (0:00:00.431) 0:00:29.919 ********* 2025-05-14 02:40:24.423158 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.423163 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.423168 | orchestrator | ok: 
[testbed-node-5] 2025-05-14 02:40:24.423175 | orchestrator | 2025-05-14 02:40:24.423184 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-14 02:40:24.423193 | orchestrator | Wednesday 14 May 2025 02:38:44 +0000 (0:00:00.762) 0:00:30.681 ********* 2025-05-14 02:40:24.423208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:40:24.423217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:40:24.423225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:40:24.423233 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423243 | orchestrator | 2025-05-14 02:40:24.423252 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-14 02:40:24.423261 | orchestrator | Wednesday 14 May 2025 02:38:44 +0000 (0:00:00.417) 0:00:31.099 ********* 2025-05-14 02:40:24.423270 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:40:24.423278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:40:24.423288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:40:24.423293 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423298 | orchestrator | 2025-05-14 02:40:24.423303 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-14 02:40:24.423309 | orchestrator | Wednesday 14 May 2025 02:38:44 +0000 (0:00:00.414) 0:00:31.514 ********* 2025-05-14 02:40:24.423318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:40:24.423323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:40:24.423329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:40:24.423334 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423339 | orchestrator | 2025-05-14 02:40:24.423344 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:40:24.423349 | orchestrator | Wednesday 14 May 2025 02:38:45 +0000 (0:00:00.488) 0:00:32.002 ********* 2025-05-14 02:40:24.423355 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:40:24.423360 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:40:24.423366 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:40:24.423371 | orchestrator | 2025-05-14 02:40:24.423376 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-14 02:40:24.423381 | orchestrator | Wednesday 14 May 2025 02:38:45 +0000 (0:00:00.383) 0:00:32.385 ********* 2025-05-14 02:40:24.423387 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-14 02:40:24.423392 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-14 02:40:24.423397 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-14 02:40:24.423402 | orchestrator | 2025-05-14 02:40:24.423407 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-14 02:40:24.423412 | orchestrator | Wednesday 14 May 2025 02:38:46 +0000 (0:00:00.932) 0:00:33.318 ********* 2025-05-14 02:40:24.423418 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423423 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.423428 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.423433 | orchestrator | 2025-05-14 02:40:24.423438 | orchestrator | TASK 
[ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-14 02:40:24.423443 | orchestrator | Wednesday 14 May 2025 02:38:47 +0000 (0:00:00.540) 0:00:33.859 ********* 2025-05-14 02:40:24.423449 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423454 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.423459 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.423465 | orchestrator | 2025-05-14 02:40:24.423470 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-14 02:40:24.423480 | orchestrator | Wednesday 14 May 2025 02:38:47 +0000 (0:00:00.358) 0:00:34.217 ********* 2025-05-14 02:40:24.423486 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-14 02:40:24.423491 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423496 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-14 02:40:24.423501 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.423506 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-14 02:40:24.423511 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.423517 | orchestrator | 2025-05-14 02:40:24.423528 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-14 02:40:24.423534 | orchestrator | Wednesday 14 May 2025 02:38:48 +0000 (0:00:00.532) 0:00:34.749 ********* 2025-05-14 02:40:24.423577 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-14 02:40:24.423584 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423589 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-14 02:40:24.423594 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.423600 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-14 02:40:24.423605 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.423610 | orchestrator | 2025-05-14 02:40:24.423616 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-14 02:40:24.423621 | orchestrator | Wednesday 14 May 2025 02:38:48 +0000 (0:00:00.528) 0:00:35.278 ********* 2025-05-14 02:40:24.423626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-14 02:40:24.423631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-14 02:40:24.423636 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-14 02:40:24.423641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-14 02:40:24.423646 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423651 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-14 02:40:24.423656 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-14 02:40:24.423661 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-14 02:40:24.423666 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.423671 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-14 02:40:24.423677 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-14 02:40:24.423682 | orchestrator | skipping: [testbed-node-5] 
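
The skipped "set_fact rgw_instances_host" items above show the per-host RGW instance structure that the ceph-facts role has already derived for each OSD node: a single instance named rgw0 bound to the host's radosgw address on frontend port 8081. A minimal Python sketch of how such a list can be assembled is given below; this is illustrative only (the actual logic is a set_fact expression inside the ceph-facts role), and build_rgw_instances / num_instances are assumed names used here for clarity.

    import json

    def build_rgw_instances(radosgw_address, radosgw_frontend_port=8081, num_instances=1):
        # One dict per RGW instance on the host; any additional instances would get
        # consecutive frontend ports. Mirrors the items shown in the log above.
        return [
            {
                "instance_name": f"rgw{i}",
                "radosgw_address": radosgw_address,
                "radosgw_frontend_port": radosgw_frontend_port + i,
            }
            for i in range(num_instances)
        ]

    # Example: testbed-node-3 as it appears in the skipped items above.
    print(json.dumps(build_rgw_instances("192.168.16.13"), indent=2))
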
2025-05-14 02:40:24.423687 | orchestrator | 2025-05-14 02:40:24.423692 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-14 02:40:24.423697 | orchestrator | Wednesday 14 May 2025 02:38:49 +0000 (0:00:00.849) 0:00:36.128 ********* 2025-05-14 02:40:24.423702 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423707 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.423713 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:40:24.423718 | orchestrator | 2025-05-14 02:40:24.423724 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-14 02:40:24.423729 | orchestrator | Wednesday 14 May 2025 02:38:49 +0000 (0:00:00.329) 0:00:36.458 ********* 2025-05-14 02:40:24.423734 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 02:40:24.423739 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:40:24.423745 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:40:24.423750 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-14 02:40:24.423760 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 02:40:24.423765 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 02:40:24.423770 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 02:40:24.423776 | orchestrator | 2025-05-14 02:40:24.423781 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-14 02:40:24.423786 | orchestrator | Wednesday 14 May 2025 02:38:50 +0000 (0:00:01.043) 0:00:37.502 ********* 2025-05-14 02:40:24.423791 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-14 02:40:24.423801 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:40:24.423806 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:40:24.423812 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-14 02:40:24.423821 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 02:40:24.423829 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 02:40:24.423838 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 02:40:24.423846 | orchestrator | 2025-05-14 02:40:24.423856 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-14 02:40:24.423862 | orchestrator | Wednesday 14 May 2025 02:38:52 +0000 (0:00:01.767) 0:00:39.269 ********* 2025-05-14 02:40:24.423867 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:40:24.423872 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:40:24.423877 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-14 02:40:24.423882 | orchestrator | 2025-05-14 02:40:24.423887 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-14 02:40:24.423897 | orchestrator | Wednesday 14 May 2025 02:38:53 +0000 (0:00:00.512) 0:00:39.782 
********* 2025-05-14 02:40:24.423905 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:40:24.423912 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:40:24.423917 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:40:24.423922 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:40:24.423928 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-14 02:40:24.423933 | orchestrator | 2025-05-14 02:40:24.423937 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-14 02:40:24.423942 | orchestrator | Wednesday 14 May 2025 02:39:34 +0000 (0:00:41.285) 0:01:21.067 ********* 2025-05-14 02:40:24.423947 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.423952 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.423957 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.423962 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.423966 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.423971 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.423976 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-14 02:40:24.423984 | orchestrator | 2025-05-14 02:40:24.423989 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-14 02:40:24.423994 | orchestrator | Wednesday 14 May 2025 02:39:54 +0000 (0:00:20.215) 0:01:41.283 ********* 2025-05-14 02:40:24.423999 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.424004 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.424012 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.424017 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.424022 | orchestrator | ok: 
[testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.424027 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.424032 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-14 02:40:24.424036 | orchestrator | 2025-05-14 02:40:24.424041 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-14 02:40:24.424047 | orchestrator | Wednesday 14 May 2025 02:40:04 +0000 (0:00:10.127) 0:01:51.411 ********* 2025-05-14 02:40:24.424051 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.424057 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 02:40:24.424061 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 02:40:24.424066 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.424071 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 02:40:24.424075 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 02:40:24.424081 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.424085 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 02:40:24.424090 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 02:40:24.424095 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.424100 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 02:40:24.424108 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 02:40:24.424114 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.424118 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 02:40:24.424123 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 02:40:24.424128 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-14 02:40:24.424133 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-14 02:40:24.424138 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-14 02:40:24.424142 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-14 02:40:24.424147 | orchestrator | 2025-05-14 02:40:24.424152 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:40:24.424157 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-14 02:40:24.424166 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-05-14 02:40:24.424173 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-05-14 02:40:24.424186 | orchestrator | 2025-05-14 02:40:24.424194 | orchestrator | 2025-05-14 02:40:24.424202 | orchestrator | 2025-05-14 02:40:24.424209 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-14 02:40:24.424217 | orchestrator | Wednesday 14 May 2025 02:40:23 +0000 (0:00:18.489) 0:02:09.900 ********* 2025-05-14 02:40:24.424225 | orchestrator | =============================================================================== 2025-05-14 02:40:24.424233 | orchestrator | create openstack pool(s) ----------------------------------------------- 41.29s 2025-05-14 02:40:24.424241 | orchestrator | generate keys ---------------------------------------------------------- 20.22s 2025-05-14 02:40:24.424249 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.49s 2025-05-14 02:40:24.424257 | orchestrator | get keys from monitors ------------------------------------------------- 10.13s 2025-05-14 02:40:24.424264 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 2.38s 2025-05-14 02:40:24.424271 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.10s 2025-05-14 02:40:24.424280 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.77s 2025-05-14 02:40:24.424288 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.46s 2025-05-14 02:40:24.424297 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.45s 2025-05-14 02:40:24.424305 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.04s 2025-05-14 02:40:24.424313 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.01s 2025-05-14 02:40:24.424320 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.93s 2025-05-14 02:40:24.424325 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.85s 2025-05-14 02:40:24.424330 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.77s 2025-05-14 02:40:24.424335 | orchestrator | ceph-facts : set_fact _radosgw_address to radosgw_address --------------- 0.76s 2025-05-14 02:40:24.424344 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.75s 2025-05-14 02:40:24.424349 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.72s 2025-05-14 02:40:24.424354 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.72s 2025-05-14 02:40:24.424359 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.71s 2025-05-14 02:40:24.424365 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.69s 2025-05-14 02:40:27.474356 | orchestrator | 2025-05-14 02:40:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:27.474504 | orchestrator | 2025-05-14 02:40:27 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:27.476037 | orchestrator | 2025-05-14 02:40:27 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:40:27.476135 | orchestrator | 2025-05-14 02:40:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:30.526726 | orchestrator | 2025-05-14 02:40:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:30.528829 | orchestrator | 2025-05-14 02:40:30 | INFO  | Task 
aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:30.531344 | orchestrator | 2025-05-14 02:40:30 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:40:30.531414 | orchestrator | 2025-05-14 02:40:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:33.591137 | orchestrator | 2025-05-14 02:40:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:33.592464 | orchestrator | 2025-05-14 02:40:33 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:33.594412 | orchestrator | 2025-05-14 02:40:33 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:40:33.594472 | orchestrator | 2025-05-14 02:40:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:36.652655 | orchestrator | 2025-05-14 02:40:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:36.653896 | orchestrator | 2025-05-14 02:40:36 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:36.655222 | orchestrator | 2025-05-14 02:40:36 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:40:36.657202 | orchestrator | 2025-05-14 02:40:36 | INFO  | Task 912bed5b-6005-4e67-8f4b-c2e24f926e99 is in state STARTED 2025-05-14 02:40:36.657238 | orchestrator | 2025-05-14 02:40:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:39.697865 | orchestrator | 2025-05-14 02:40:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:39.697986 | orchestrator | 2025-05-14 02:40:39 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:39.698837 | orchestrator | 2025-05-14 02:40:39 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:40:39.699694 | orchestrator | 2025-05-14 02:40:39 | INFO  | Task 912bed5b-6005-4e67-8f4b-c2e24f926e99 is in state STARTED 2025-05-14 02:40:39.699740 | orchestrator | 2025-05-14 02:40:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:42.741673 | orchestrator | 2025-05-14 02:40:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:42.743800 | orchestrator | 2025-05-14 02:40:42 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:42.745726 | orchestrator | 2025-05-14 02:40:42 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:40:42.747587 | orchestrator | 2025-05-14 02:40:42 | INFO  | Task 912bed5b-6005-4e67-8f4b-c2e24f926e99 is in state STARTED 2025-05-14 02:40:42.747973 | orchestrator | 2025-05-14 02:40:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:45.797915 | orchestrator | 2025-05-14 02:40:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:45.798089 | orchestrator | 2025-05-14 02:40:45 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:45.798510 | orchestrator | 2025-05-14 02:40:45 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:40:45.800026 | orchestrator | 2025-05-14 02:40:45 | INFO  | Task 912bed5b-6005-4e67-8f4b-c2e24f926e99 is in state STARTED 2025-05-14 02:40:45.800055 | orchestrator | 2025-05-14 02:40:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:48.855023 | orchestrator | 2025-05-14 02:40:48 | INFO  | Task 
d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:48.856817 | orchestrator | 2025-05-14 02:40:48 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:48.858281 | orchestrator | 2025-05-14 02:40:48 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:40:48.859908 | orchestrator | 2025-05-14 02:40:48 | INFO  | Task 912bed5b-6005-4e67-8f4b-c2e24f926e99 is in state STARTED 2025-05-14 02:40:48.860011 | orchestrator | 2025-05-14 02:40:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:51.915679 | orchestrator | 2025-05-14 02:40:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:51.915804 | orchestrator | 2025-05-14 02:40:51 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:51.916670 | orchestrator | 2025-05-14 02:40:51 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:40:51.917102 | orchestrator | 2025-05-14 02:40:51 | INFO  | Task 912bed5b-6005-4e67-8f4b-c2e24f926e99 is in state STARTED 2025-05-14 02:40:51.917125 | orchestrator | 2025-05-14 02:40:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:54.962009 | orchestrator | 2025-05-14 02:40:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:54.964661 | orchestrator | 2025-05-14 02:40:54 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:54.966314 | orchestrator | 2025-05-14 02:40:54 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:40:54.968814 | orchestrator | 2025-05-14 02:40:54 | INFO  | Task 912bed5b-6005-4e67-8f4b-c2e24f926e99 is in state STARTED 2025-05-14 02:40:54.971015 | orchestrator | 2025-05-14 02:40:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:40:58.021006 | orchestrator | 2025-05-14 02:40:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:40:58.022182 | orchestrator | 2025-05-14 02:40:58 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state STARTED 2025-05-14 02:40:58.023950 | orchestrator | 2025-05-14 02:40:58 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:40:58.025389 | orchestrator | 2025-05-14 02:40:58 | INFO  | Task 912bed5b-6005-4e67-8f4b-c2e24f926e99 is in state STARTED 2025-05-14 02:40:58.025425 | orchestrator | 2025-05-14 02:40:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:01.074334 | orchestrator | 2025-05-14 02:41:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:01.075113 | orchestrator | 2025-05-14 02:41:01 | INFO  | Task aad3cf3b-b96c-407c-aeed-f816e9f6fce1 is in state SUCCESS 2025-05-14 02:41:01.076648 | orchestrator | 2025-05-14 02:41:01.076689 | orchestrator | 2025-05-14 02:41:01.076701 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:41:01.076713 | orchestrator | 2025-05-14 02:41:01.076724 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:41:01.076736 | orchestrator | Wednesday 14 May 2025 02:38:26 +0000 (0:00:00.309) 0:00:00.309 ********* 2025-05-14 02:41:01.076747 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:01.076759 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:41:01.076770 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:41:01.076781 | orchestrator | 
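
The interleaved INFO lines above come from the deployment CLI polling the state of the background tasks it has queued, waiting briefly between checks as the "Wait 1 second(s) until the next check" messages indicate. When a task reaches SUCCESS, its buffered Ansible output is flushed to the console, which is why the keystone play that follows carries internal timestamps from 02:38 even though it is logged at 02:41. A minimal Python sketch of such a wait loop is shown below; it is illustrative only, and get_task_state is a placeholder for however the task backend is queried, not an actual OSISM API.

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        # Poll every queued task until it reaches a terminal state, echoing the
        # status lines seen in the log above.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)
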
2025-05-14 02:41:01.076792 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:41:01.076803 | orchestrator | Wednesday 14 May 2025 02:38:26 +0000 (0:00:00.393) 0:00:00.703 ********* 2025-05-14 02:41:01.076814 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-14 02:41:01.076825 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-14 02:41:01.076836 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-14 02:41:01.076847 | orchestrator | 2025-05-14 02:41:01.076857 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-14 02:41:01.076868 | orchestrator | 2025-05-14 02:41:01.076879 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 02:41:01.076891 | orchestrator | Wednesday 14 May 2025 02:38:27 +0000 (0:00:00.310) 0:00:01.013 ********* 2025-05-14 02:41:01.076902 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:41:01.076914 | orchestrator | 2025-05-14 02:41:01.076925 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-14 02:41:01.076936 | orchestrator | Wednesday 14 May 2025 02:38:28 +0000 (0:00:00.974) 0:00:01.988 ********* 2025-05-14 02:41:01.076999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.077019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
"roundrobin"']}}}}) 2025-05-14 02:41:01.077675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.077711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:41:01.077724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:41:01.077760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:41:01.077773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.077784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.077802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.077823 | orchestrator | 2025-05-14 02:41:01.077842 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-14 02:41:01.077871 | orchestrator | Wednesday 14 May 2025 02:38:30 +0000 (0:00:02.155) 0:00:04.144 ********* 2025-05-14 02:41:01.077890 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-14 02:41:01.077908 | orchestrator | 2025-05-14 02:41:01.077928 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-14 02:41:01.077948 | orchestrator | Wednesday 14 May 2025 02:38:30 +0000 (0:00:00.578) 0:00:04.722 ********* 2025-05-14 02:41:01.077969 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:01.077988 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:41:01.078004 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:41:01.078071 | orchestrator | 2025-05-14 02:41:01.078097 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-14 02:41:01.078109 | orchestrator | Wednesday 14 May 2025 02:38:31 +0000 (0:00:00.453) 0:00:05.175 ********* 2025-05-14 02:41:01.078130 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:41:01.078142 | orchestrator | 2025-05-14 02:41:01.078151 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 02:41:01.078161 | orchestrator | Wednesday 14 May 2025 02:38:31 +0000 (0:00:00.474) 0:00:05.650 ********* 2025-05-14 02:41:01.078171 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:41:01.078183 | orchestrator | 2025-05-14 02:41:01.078195 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-14 02:41:01.078207 | orchestrator | Wednesday 14 May 2025 02:38:32 +0000 (0:00:00.644) 0:00:06.295 ********* 2025-05-14 02:41:01.078226 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.078241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.078268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.078289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:41:01.078317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:41:01.078340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:41:01.078359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.078377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.078396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}}) 2025-05-14 02:41:01.078413 | orchestrator | 2025-05-14 02:41:01.078430 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-14 02:41:01.078446 | orchestrator | Wednesday 14 May 2025 02:38:35 +0000 (0:00:03.111) 0:00:09.406 ********* 2025-05-14 02:41:01.078476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:41:01.078509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:41:01.078559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:41:01.078573 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.078586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:41:01.078597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:41:01.078616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:41:01.078634 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:41:01.078645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:41:01.078660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:41:01.078671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:41:01.078681 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:41:01.078690 | orchestrator | 2025-05-14 02:41:01.078700 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-14 02:41:01.078710 | orchestrator | Wednesday 14 May 2025 02:38:36 +0000 (0:00:00.594) 0:00:10.001 ********* 2025-05-14 02:41:01.078720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:41:01.078744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:41:01.078755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:41:01.078771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:41:01.078782 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.078793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:41:01.078803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:41:01.078825 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:41:01.078920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-14 02:41:01.078936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:41:01.078952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-14 02:41:01.078962 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:41:01.078972 | orchestrator | 2025-05-14 02:41:01.078982 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-14 02:41:01.078991 | orchestrator | Wednesday 14 May 2025 02:38:37 +0000 (0:00:00.961) 0:00:10.963 ********* 2025-05-14 02:41:01.079002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.079013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.079039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.079050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:41:01.079065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:41:01.079075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:41:01.079085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.079102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.079117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.079128 | orchestrator | 2025-05-14 02:41:01.079137 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-14 02:41:01.079147 | orchestrator | Wednesday 14 May 2025 02:38:40 +0000 (0:00:03.489) 0:00:14.452 ********* 2025-05-14 02:41:01.079162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.079173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.079190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:41:01.079201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:41:01.079217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.079233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:41:01.079244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 
'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.079254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.079269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.079279 | orchestrator | 2025-05-14 02:41:01.079289 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-14 02:41:01.079299 | orchestrator | Wednesday 14 May 2025 02:38:48 +0000 (0:00:07.951) 0:00:22.403 ********* 2025-05-14 02:41:01.079308 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:41:01.079318 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:41:01.079328 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:41:01.079337 | orchestrator | 2025-05-14 02:41:01.079347 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-14 02:41:01.079356 | orchestrator | Wednesday 14 May 2025 02:38:51 +0000 (0:00:02.550) 0:00:24.954 ********* 2025-05-14 02:41:01.079366 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.079376 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:41:01.079385 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:41:01.079394 | orchestrator | 2025-05-14 02:41:01.079408 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-14 02:41:01.079418 | orchestrator | Wednesday 14 May 2025 02:38:52 +0000 (0:00:01.572) 0:00:26.527 ********* 2025-05-14 02:41:01.079428 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.079437 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:41:01.079447 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:41:01.079456 | orchestrator | 2025-05-14 02:41:01.079466 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 
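
The container definitions echoed in the loop items above each carry a kolla healthcheck: healthcheck_curl against the keystone API on port 5000, healthcheck_listen for the keystone-ssh sshd on port 8023, and /usr/bin/fernet-healthcheck.sh for the fernet container, all with a 30-second interval/timeout and 3 retries. A minimal sketch of roughly equivalent manual checks from a controller node follows; the address 192.168.16.10 is the testbed-node-0 API address taken from the items above, and the exact behaviour of the kolla helpers is assumed rather than shown in this output.

    # Assumed rough equivalents of the healthchecks listed in the container items
    curl -sf http://192.168.16.10:5000/ >/dev/null && echo "keystone API responding"
    ss -lnt | grep -q ':8023 ' && echo "keystone-ssh sshd listening"
    docker exec keystone_fernet /usr/bin/fernet-healthcheck.sh && echo "fernet keys healthy"
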
2025-05-14 02:41:01.079475 | orchestrator | Wednesday 14 May 2025 02:38:53 +0000 (0:00:00.480) 0:00:27.007 ********* 2025-05-14 02:41:01.079485 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.079494 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:41:01.079504 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:41:01.079513 | orchestrator | 2025-05-14 02:41:01.079545 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-14 02:41:01.079563 | orchestrator | Wednesday 14 May 2025 02:38:53 +0000 (0:00:00.397) 0:00:27.404 ********* 2025-05-14 02:41:01.079579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.079590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:41:01.079607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.079618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:41:01.079635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.079646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-14 02:41:01.079661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.079677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}}) 2025-05-14 02:41:01.079687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.079697 | orchestrator | 2025-05-14 02:41:01.079707 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 02:41:01.079717 | orchestrator | Wednesday 14 May 2025 02:38:55 +0000 (0:00:02.365) 0:00:29.770 ********* 2025-05-14 02:41:01.079726 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.079736 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:41:01.079745 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:41:01.079755 | orchestrator | 2025-05-14 02:41:01.079764 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-14 02:41:01.079774 | orchestrator | Wednesday 14 May 2025 02:38:56 +0000 (0:00:00.349) 0:00:30.120 ********* 2025-05-14 02:41:01.079783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-14 02:41:01.079793 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-14 02:41:01.079808 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-14 02:41:01.079818 | orchestrator | 2025-05-14 02:41:01.079828 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-14 02:41:01.079837 | orchestrator | Wednesday 14 May 2025 02:38:58 +0000 (0:00:02.178) 0:00:32.299 ********* 2025-05-14 02:41:01.079847 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:41:01.079857 | orchestrator | 2025-05-14 02:41:01.079866 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-14 02:41:01.079876 | orchestrator | Wednesday 14 May 2025 02:38:59 +0000 (0:00:00.743) 0:00:33.042 ********* 2025-05-14 02:41:01.079885 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.079895 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:41:01.079904 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:41:01.079914 | orchestrator | 2025-05-14 02:41:01.079923 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-14 02:41:01.079933 | orchestrator | Wednesday 14 May 2025 02:39:00 +0000 (0:00:01.228) 0:00:34.271 ********* 2025-05-14 02:41:01.079942 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-14 02:41:01.079952 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:41:01.079968 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-14 02:41:01.079977 | orchestrator | 2025-05-14 02:41:01.079987 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-14 02:41:01.079997 | orchestrator | Wednesday 14 May 2025 02:39:01 +0000 (0:00:01.148) 0:00:35.419 ********* 
2025-05-14 02:41:01.080006 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:01.080016 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:41:01.080025 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:41:01.080035 | orchestrator | 2025-05-14 02:41:01.080044 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-14 02:41:01.080054 | orchestrator | Wednesday 14 May 2025 02:39:01 +0000 (0:00:00.285) 0:00:35.705 ********* 2025-05-14 02:41:01.080064 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-14 02:41:01.080073 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-14 02:41:01.080083 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-14 02:41:01.080098 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-14 02:41:01.080108 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-14 02:41:01.080118 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-14 02:41:01.080128 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-14 02:41:01.080138 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-14 02:41:01.080147 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-14 02:41:01.080157 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-14 02:41:01.080166 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-14 02:41:01.080176 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-14 02:41:01.080185 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-14 02:41:01.080195 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-14 02:41:01.080204 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-14 02:41:01.080214 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 02:41:01.080224 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 02:41:01.080233 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 02:41:01.080243 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 02:41:01.080252 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 02:41:01.080262 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 02:41:01.080271 | orchestrator | 2025-05-14 02:41:01.080281 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-14 02:41:01.080290 | orchestrator | Wednesday 14 May 2025 02:39:12 +0000 (0:00:10.981) 0:00:46.686 
********* 2025-05-14 02:41:01.080300 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 02:41:01.080309 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 02:41:01.080318 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 02:41:01.080337 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 02:41:01.080346 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 02:41:01.080361 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 02:41:01.080371 | orchestrator | 2025-05-14 02:41:01.080381 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-14 02:41:01.080390 | orchestrator | Wednesday 14 May 2025 02:39:16 +0000 (0:00:03.278) 0:00:49.965 ********* 2025-05-14 02:41:01.080401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.080416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.080428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-14 02:41:01.080439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:41:01.080463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:41:01.080481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-14 02:41:01.080504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.080522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.080561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-14 02:41:01.080577 | orchestrator | 2025-05-14 02:41:01.080592 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 02:41:01.080606 | orchestrator | Wednesday 14 May 2025 02:39:18 +0000 (0:00:02.857) 0:00:52.823 ********* 2025-05-14 02:41:01.080622 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.080638 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:41:01.080655 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:41:01.080683 | orchestrator | 2025-05-14 02:41:01.080715 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-14 02:41:01.080725 | orchestrator | Wednesday 14 May 2025 02:39:19 +0000 (0:00:00.283) 0:00:53.107 ********* 2025-05-14 02:41:01.080746 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:41:01.080756 | orchestrator | 2025-05-14 02:41:01.080765 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-14 02:41:01.080775 | orchestrator | Wednesday 14 May 2025 02:39:21 +0000 (0:00:02.474) 0:00:55.581 ********* 2025-05-14 02:41:01.080784 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:41:01.080794 | orchestrator | 2025-05-14 02:41:01.080803 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-14 02:41:01.080813 | orchestrator | Wednesday 14 May 2025 02:39:24 +0000 (0:00:02.349) 0:00:57.931 ********* 2025-05-14 02:41:01.080823 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:41:01.080832 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:41:01.080842 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:01.080852 | orchestrator | 2025-05-14 02:41:01.080861 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-14 02:41:01.080871 | orchestrator | Wednesday 14 May 2025 02:39:25 +0000 (0:00:01.181) 0:00:59.113 ********* 2025-05-14 02:41:01.080880 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:01.080897 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:41:01.080907 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:41:01.080917 | orchestrator | 2025-05-14 02:41:01.080927 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-14 02:41:01.080936 | orchestrator | Wednesday 14 May 2025 02:39:25 +0000 
(0:00:00.375) 0:00:59.488 ********* 2025-05-14 02:41:01.080946 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.080956 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:41:01.080965 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:41:01.080975 | orchestrator | 2025-05-14 02:41:01.080984 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-14 02:41:01.080994 | orchestrator | Wednesday 14 May 2025 02:39:26 +0000 (0:00:00.532) 0:01:00.021 ********* 2025-05-14 02:41:01.081004 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:41:01.081013 | orchestrator | 2025-05-14 02:41:01.081023 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-05-14 02:41:01.081032 | orchestrator | Wednesday 14 May 2025 02:39:39 +0000 (0:00:13.211) 0:01:13.232 ********* 2025-05-14 02:41:01.081042 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:41:01.081051 | orchestrator | 2025-05-14 02:41:01.081061 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-14 02:41:01.081071 | orchestrator | Wednesday 14 May 2025 02:39:48 +0000 (0:00:09.617) 0:01:22.849 ********* 2025-05-14 02:41:01.081080 | orchestrator | 2025-05-14 02:41:01.081090 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-14 02:41:01.081099 | orchestrator | Wednesday 14 May 2025 02:39:49 +0000 (0:00:00.055) 0:01:22.904 ********* 2025-05-14 02:41:01.081109 | orchestrator | 2025-05-14 02:41:01.081119 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-14 02:41:01.081128 | orchestrator | Wednesday 14 May 2025 02:39:49 +0000 (0:00:00.054) 0:01:22.959 ********* 2025-05-14 02:41:01.081137 | orchestrator | 2025-05-14 02:41:01.081147 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-05-14 02:41:01.081156 | orchestrator | Wednesday 14 May 2025 02:39:49 +0000 (0:00:00.057) 0:01:23.016 ********* 2025-05-14 02:41:01.081166 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:41:01.081176 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:41:01.081185 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:41:01.081195 | orchestrator | 2025-05-14 02:41:01.081204 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-05-14 02:41:01.081220 | orchestrator | Wednesday 14 May 2025 02:39:58 +0000 (0:00:09.723) 0:01:32.740 ********* 2025-05-14 02:41:01.081230 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:41:01.081245 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:41:01.081255 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:41:01.081264 | orchestrator | 2025-05-14 02:41:01.081274 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-05-14 02:41:01.081283 | orchestrator | Wednesday 14 May 2025 02:40:09 +0000 (0:00:10.259) 0:01:43.000 ********* 2025-05-14 02:41:01.081293 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:41:01.081303 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:41:01.081312 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:41:01.081322 | orchestrator | 2025-05-14 02:41:01.081331 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 02:41:01.081341 | orchestrator | Wednesday 14 May 2025 02:40:14 
+0000 (0:00:05.667) 0:01:48.667 ********* 2025-05-14 02:41:01.081351 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:41:01.081360 | orchestrator | 2025-05-14 02:41:01.081370 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-05-14 02:41:01.081380 | orchestrator | Wednesday 14 May 2025 02:40:15 +0000 (0:00:00.845) 0:01:49.513 ********* 2025-05-14 02:41:01.081389 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:01.081399 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:41:01.081409 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:41:01.081419 | orchestrator | 2025-05-14 02:41:01.081428 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-05-14 02:41:01.081438 | orchestrator | Wednesday 14 May 2025 02:40:16 +0000 (0:00:01.028) 0:01:50.541 ********* 2025-05-14 02:41:01.081448 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:41:01.081457 | orchestrator | 2025-05-14 02:41:01.081467 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-05-14 02:41:01.081476 | orchestrator | Wednesday 14 May 2025 02:40:18 +0000 (0:00:01.478) 0:01:52.019 ********* 2025-05-14 02:41:01.081486 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-05-14 02:41:01.081496 | orchestrator | 2025-05-14 02:41:01.081505 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-05-14 02:41:01.081515 | orchestrator | Wednesday 14 May 2025 02:40:28 +0000 (0:00:10.110) 0:02:02.129 ********* 2025-05-14 02:41:01.081607 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-05-14 02:41:01.081622 | orchestrator | 2025-05-14 02:41:01.081632 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-05-14 02:41:01.081643 | orchestrator | Wednesday 14 May 2025 02:40:48 +0000 (0:00:20.152) 0:02:22.282 ********* 2025-05-14 02:41:01.081653 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-05-14 02:41:01.081664 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-05-14 02:41:01.081674 | orchestrator | 2025-05-14 02:41:01.081684 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-05-14 02:41:01.081695 | orchestrator | Wednesday 14 May 2025 02:40:55 +0000 (0:00:07.310) 0:02:29.592 ********* 2025-05-14 02:41:01.081705 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.081715 | orchestrator | 2025-05-14 02:41:01.081726 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-05-14 02:41:01.081736 | orchestrator | Wednesday 14 May 2025 02:40:55 +0000 (0:00:00.135) 0:02:29.728 ********* 2025-05-14 02:41:01.081746 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.081756 | orchestrator | 2025-05-14 02:41:01.081784 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-05-14 02:41:01.081803 | orchestrator | Wednesday 14 May 2025 02:40:55 +0000 (0:00:00.116) 0:02:29.844 ********* 2025-05-14 02:41:01.081814 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.081824 | orchestrator | 2025-05-14 02:41:01.081834 | orchestrator | TASK [service-ks-register : keystone | 
Granting user roles] ******************** 2025-05-14 02:41:01.081844 | orchestrator | Wednesday 14 May 2025 02:40:56 +0000 (0:00:00.097) 0:02:29.941 ********* 2025-05-14 02:41:01.081905 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.081917 | orchestrator | 2025-05-14 02:41:01.081928 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-05-14 02:41:01.081938 | orchestrator | Wednesday 14 May 2025 02:40:56 +0000 (0:00:00.333) 0:02:30.275 ********* 2025-05-14 02:41:01.081948 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:01.081958 | orchestrator | 2025-05-14 02:41:01.081969 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-14 02:41:01.081979 | orchestrator | Wednesday 14 May 2025 02:40:59 +0000 (0:00:03.315) 0:02:33.591 ********* 2025-05-14 02:41:01.082000 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:01.082010 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:41:01.082071 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:41:01.082083 | orchestrator | 2025-05-14 02:41:01.082093 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:41:01.082103 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-14 02:41:01.082116 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-14 02:41:01.082126 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-14 02:41:01.082136 | orchestrator | 2025-05-14 02:41:01.082146 | orchestrator | 2025-05-14 02:41:01.082157 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:41:01.082167 | orchestrator | Wednesday 14 May 2025 02:41:00 +0000 (0:00:00.578) 0:02:34.169 ********* 2025-05-14 02:41:01.082177 | orchestrator | =============================================================================== 2025-05-14 02:41:01.082192 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.15s 2025-05-14 02:41:01.082202 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.21s 2025-05-14 02:41:01.082212 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.98s 2025-05-14 02:41:01.082222 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.26s 2025-05-14 02:41:01.082232 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.11s 2025-05-14 02:41:01.082242 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 9.72s 2025-05-14 02:41:01.082252 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.62s 2025-05-14 02:41:01.082262 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.95s 2025-05-14 02:41:01.082273 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.31s 2025-05-14 02:41:01.082282 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.67s 2025-05-14 02:41:01.082292 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.49s 2025-05-14 02:41:01.082302 | orchestrator | keystone : Creating default user role 
----------------------------------- 3.32s 2025-05-14 02:41:01.082313 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.28s 2025-05-14 02:41:01.082323 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.11s 2025-05-14 02:41:01.082333 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.86s 2025-05-14 02:41:01.082343 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.55s 2025-05-14 02:41:01.082354 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.47s 2025-05-14 02:41:01.082371 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.37s 2025-05-14 02:41:01.082394 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.35s 2025-05-14 02:41:01.082415 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.18s 2025-05-14 02:41:01.082442 | orchestrator | 2025-05-14 02:41:01 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:41:01.082459 | orchestrator | 2025-05-14 02:41:01 | INFO  | Task 912bed5b-6005-4e67-8f4b-c2e24f926e99 is in state STARTED 2025-05-14 02:41:01.082474 | orchestrator | 2025-05-14 02:41:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:04.140565 | orchestrator | 2025-05-14 02:41:04.140664 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 02:41:04.140675 | orchestrator | 2025-05-14 02:41:04.140682 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-05-14 02:41:04.140688 | orchestrator | 2025-05-14 02:41:04.140694 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-14 02:41:04.140700 | orchestrator | Wednesday 14 May 2025 02:40:36 +0000 (0:00:00.442) 0:00:00.442 ********* 2025-05-14 02:41:04.140707 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-05-14 02:41:04.140713 | orchestrator | 2025-05-14 02:41:04.140719 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-14 02:41:04.140725 | orchestrator | Wednesday 14 May 2025 02:40:36 +0000 (0:00:00.192) 0:00:00.635 ********* 2025-05-14 02:41:04.140731 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:41:04.140747 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 02:41:04.140753 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 02:41:04.140766 | orchestrator | 2025-05-14 02:41:04.140772 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-14 02:41:04.140778 | orchestrator | Wednesday 14 May 2025 02:40:37 +0000 (0:00:00.741) 0:00:01.376 ********* 2025-05-14 02:41:04.140783 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-05-14 02:41:04.140789 | orchestrator | 2025-05-14 02:41:04.140795 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-14 02:41:04.140800 | orchestrator | Wednesday 14 May 2025 02:40:37 +0000 (0:00:00.211) 0:00:01.588 ********* 2025-05-14 02:41:04.140806 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.140813 | orchestrator | 
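
The is_atomic fact set in the next task comes from the atomic-host check that just returned ok; a minimal sketch of that probe, assuming ceph-ansible's usual test for the ostree marker file (the exact command is not echoed in this log):

    # Assumed equivalent of the "check if it is atomic host" task: treat the host
    # as an Atomic/ostree system when /run/ostree-booted exists.
    if stat /run/ostree-booted >/dev/null 2>&1; then
        echo "is_atomic=true"
    else
        echo "is_atomic=false"
    fi
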
2025-05-14 02:41:04.140819 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-14 02:41:04.140824 | orchestrator | Wednesday 14 May 2025 02:40:37 +0000 (0:00:00.552) 0:00:02.140 ********* 2025-05-14 02:41:04.140830 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.140836 | orchestrator | 2025-05-14 02:41:04.140842 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-14 02:41:04.140847 | orchestrator | Wednesday 14 May 2025 02:40:38 +0000 (0:00:00.141) 0:00:02.282 ********* 2025-05-14 02:41:04.140853 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.140859 | orchestrator | 2025-05-14 02:41:04.140865 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-14 02:41:04.140870 | orchestrator | Wednesday 14 May 2025 02:40:38 +0000 (0:00:00.447) 0:00:02.730 ********* 2025-05-14 02:41:04.140876 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.140882 | orchestrator | 2025-05-14 02:41:04.140888 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-14 02:41:04.140893 | orchestrator | Wednesday 14 May 2025 02:40:38 +0000 (0:00:00.128) 0:00:02.858 ********* 2025-05-14 02:41:04.140899 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.140905 | orchestrator | 2025-05-14 02:41:04.140911 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-14 02:41:04.140929 | orchestrator | Wednesday 14 May 2025 02:40:38 +0000 (0:00:00.140) 0:00:02.998 ********* 2025-05-14 02:41:04.140935 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.140941 | orchestrator | 2025-05-14 02:41:04.140947 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-14 02:41:04.140953 | orchestrator | Wednesday 14 May 2025 02:40:38 +0000 (0:00:00.119) 0:00:03.118 ********* 2025-05-14 02:41:04.140973 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.140979 | orchestrator | 2025-05-14 02:41:04.140985 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-14 02:41:04.140991 | orchestrator | Wednesday 14 May 2025 02:40:39 +0000 (0:00:00.132) 0:00:03.250 ********* 2025-05-14 02:41:04.140997 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.141003 | orchestrator | 2025-05-14 02:41:04.141008 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-14 02:41:04.141014 | orchestrator | Wednesday 14 May 2025 02:40:39 +0000 (0:00:00.114) 0:00:03.365 ********* 2025-05-14 02:41:04.141020 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:41:04.141026 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:41:04.141032 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:41:04.141038 | orchestrator | 2025-05-14 02:41:04.141044 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-14 02:41:04.141049 | orchestrator | Wednesday 14 May 2025 02:40:39 +0000 (0:00:00.745) 0:00:04.110 ********* 2025-05-14 02:41:04.141055 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.141061 | orchestrator | 2025-05-14 02:41:04.141066 | orchestrator | TASK [ceph-facts : find a running mon container] 
******************************* 2025-05-14 02:41:04.141072 | orchestrator | Wednesday 14 May 2025 02:40:40 +0000 (0:00:00.216) 0:00:04.327 ********* 2025-05-14 02:41:04.141079 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:41:04.141086 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:41:04.141092 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:41:04.141099 | orchestrator | 2025-05-14 02:41:04.141105 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-14 02:41:04.141112 | orchestrator | Wednesday 14 May 2025 02:40:42 +0000 (0:00:01.936) 0:00:06.264 ********* 2025-05-14 02:41:04.141119 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:41:04.141126 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:41:04.141132 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:41:04.141139 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141146 | orchestrator | 2025-05-14 02:41:04.141152 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-14 02:41:04.141172 | orchestrator | Wednesday 14 May 2025 02:40:42 +0000 (0:00:00.426) 0:00:06.690 ********* 2025-05-14 02:41:04.141181 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-14 02:41:04.141190 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-14 02:41:04.141198 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-14 02:41:04.141204 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141211 | orchestrator | 2025-05-14 02:41:04.141218 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-14 02:41:04.141225 | orchestrator | Wednesday 14 May 2025 02:40:43 +0000 (0:00:00.797) 0:00:07.488 ********* 2025-05-14 02:41:04.141233 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:41:04.141246 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 
02:41:04.141257 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-14 02:41:04.141264 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141271 | orchestrator | 2025-05-14 02:41:04.141278 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-14 02:41:04.141285 | orchestrator | Wednesday 14 May 2025 02:40:43 +0000 (0:00:00.171) 0:00:07.660 ********* 2025-05-14 02:41:04.141294 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '17727295c928', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-14 02:40:40.722363', 'end': '2025-05-14 02:40:40.766212', 'delta': '0:00:00.043849', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['17727295c928'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-14 02:41:04.141304 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '59d5e18c50f5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-14 02:40:41.287447', 'end': '2025-05-14 02:40:41.322356', 'delta': '0:00:00.034909', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['59d5e18c50f5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-14 02:41:04.141317 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '02d74a4546cf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-14 02:40:41.851737', 'end': '2025-05-14 02:40:41.903909', 'delta': '0:00:00.052172', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['02d74a4546cf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-14 02:41:04.141325 | orchestrator | 2025-05-14 02:41:04.141332 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-14 02:41:04.141339 | orchestrator | Wednesday 14 May 2025 02:40:43 +0000 (0:00:00.200) 0:00:07.860 ********* 2025-05-14 02:41:04.141346 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.141353 | orchestrator | 2025-05-14 02:41:04.141364 | orchestrator | TASK [ceph-facts : get current 
fsid if cluster is already running] ************* 2025-05-14 02:41:04.141371 | orchestrator | Wednesday 14 May 2025 02:40:43 +0000 (0:00:00.274) 0:00:08.134 ********* 2025-05-14 02:41:04.141378 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-14 02:41:04.141385 | orchestrator | 2025-05-14 02:41:04.141391 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-14 02:41:04.141398 | orchestrator | Wednesday 14 May 2025 02:40:45 +0000 (0:00:01.722) 0:00:09.857 ********* 2025-05-14 02:41:04.141405 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141412 | orchestrator | 2025-05-14 02:41:04.141419 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-14 02:41:04.141425 | orchestrator | Wednesday 14 May 2025 02:40:45 +0000 (0:00:00.149) 0:00:10.006 ********* 2025-05-14 02:41:04.141432 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141439 | orchestrator | 2025-05-14 02:41:04.141445 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 02:41:04.141451 | orchestrator | Wednesday 14 May 2025 02:40:46 +0000 (0:00:00.217) 0:00:10.223 ********* 2025-05-14 02:41:04.141457 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141462 | orchestrator | 2025-05-14 02:41:04.141468 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-14 02:41:04.141474 | orchestrator | Wednesday 14 May 2025 02:40:46 +0000 (0:00:00.130) 0:00:10.354 ********* 2025-05-14 02:41:04.141479 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.141485 | orchestrator | 2025-05-14 02:41:04.141491 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-14 02:41:04.141496 | orchestrator | Wednesday 14 May 2025 02:40:46 +0000 (0:00:00.135) 0:00:10.490 ********* 2025-05-14 02:41:04.141502 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141507 | orchestrator | 2025-05-14 02:41:04.141513 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-14 02:41:04.141519 | orchestrator | Wednesday 14 May 2025 02:40:46 +0000 (0:00:00.276) 0:00:10.767 ********* 2025-05-14 02:41:04.141586 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141595 | orchestrator | 2025-05-14 02:41:04.141604 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-14 02:41:04.141613 | orchestrator | Wednesday 14 May 2025 02:40:46 +0000 (0:00:00.127) 0:00:10.895 ********* 2025-05-14 02:41:04.141622 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141631 | orchestrator | 2025-05-14 02:41:04.141641 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-14 02:41:04.141650 | orchestrator | Wednesday 14 May 2025 02:40:46 +0000 (0:00:00.124) 0:00:11.019 ********* 2025-05-14 02:41:04.141659 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141668 | orchestrator | 2025-05-14 02:41:04.141677 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-14 02:41:04.141686 | orchestrator | Wednesday 14 May 2025 02:40:46 +0000 (0:00:00.130) 0:00:11.150 ********* 2025-05-14 02:41:04.141695 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141704 | orchestrator | 2025-05-14 02:41:04.141713 | 
orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-14 02:41:04.141722 | orchestrator | Wednesday 14 May 2025 02:40:47 +0000 (0:00:00.129) 0:00:11.280 ********* 2025-05-14 02:41:04.141731 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141740 | orchestrator | 2025-05-14 02:41:04.141749 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-14 02:41:04.141758 | orchestrator | Wednesday 14 May 2025 02:40:47 +0000 (0:00:00.140) 0:00:11.420 ********* 2025-05-14 02:41:04.141768 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141777 | orchestrator | 2025-05-14 02:41:04.141786 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-14 02:41:04.141795 | orchestrator | Wednesday 14 May 2025 02:40:47 +0000 (0:00:00.315) 0:00:11.736 ********* 2025-05-14 02:41:04.141804 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.141821 | orchestrator | 2025-05-14 02:41:04.141829 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-14 02:41:04.141838 | orchestrator | Wednesday 14 May 2025 02:40:47 +0000 (0:00:00.135) 0:00:11.872 ********* 2025-05-14 02:41:04.141848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:41:04.141866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:41:04.141876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:41:04.141887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:41:04.141897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-05-14 02:41:04.141908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:41:04.141918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:41:04.141929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-14 02:41:04.142101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749', 'scsi-SQEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part1', 'scsi-SQEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part14', 'scsi-SQEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part15', 'scsi-SQEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part16', 'scsi-SQEMU_QEMU_HARDDISK_d6958c45-3c69-4688-be65-10947b181749-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:41:04.142138 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-14-01-42-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-14 02:41:04.142150 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.142161 | orchestrator | 2025-05-14 02:41:04.142168 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-14 02:41:04.142174 | orchestrator | Wednesday 14 May 2025 02:40:47 +0000 (0:00:00.253) 0:00:12.125 ********* 2025-05-14 02:41:04.142179 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.142185 | orchestrator | 2025-05-14 02:41:04.142191 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-14 02:41:04.142197 | orchestrator | Wednesday 14 May 2025 02:40:48 +0000 (0:00:00.261) 0:00:12.387 ********* 2025-05-14 02:41:04.142203 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.142208 | orchestrator | 2025-05-14 02:41:04.142214 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-14 02:41:04.142220 | orchestrator | Wednesday 14 May 2025 02:40:48 +0000 (0:00:00.137) 0:00:12.525 ********* 2025-05-14 02:41:04.142226 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.142232 | orchestrator | 2025-05-14 02:41:04.142237 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-14 02:41:04.142246 | orchestrator | Wednesday 14 May 2025 02:40:48 +0000 (0:00:00.136) 0:00:12.661 ********* 2025-05-14 02:41:04.142252 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.142258 | orchestrator | 2025-05-14 02:41:04.142264 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-14 02:41:04.142270 | orchestrator | Wednesday 14 May 2025 02:40:48 +0000 (0:00:00.484) 0:00:13.145 ********* 2025-05-14 02:41:04.142280 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.142286 | orchestrator | 2025-05-14 02:41:04.142292 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 02:41:04.142298 | orchestrator | Wednesday 14 May 2025 02:40:49 +0000 (0:00:00.121) 0:00:13.267 ********* 2025-05-14 02:41:04.142304 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.142309 | orchestrator | 2025-05-14 02:41:04.142315 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 02:41:04.142321 | orchestrator | Wednesday 14 May 2025 02:40:49 +0000 (0:00:00.488) 0:00:13.756 ********* 2025-05-14 02:41:04.142327 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.142333 | orchestrator | 2025-05-14 02:41:04.142339 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-14 02:41:04.142345 | orchestrator | Wednesday 14 May 2025 02:40:49 +0000 (0:00:00.135) 0:00:13.892 ********* 2025-05-14 02:41:04.142350 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.142356 | orchestrator | 2025-05-14 02:41:04.142362 | 
orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-14 02:41:04.142368 | orchestrator | Wednesday 14 May 2025 02:40:49 +0000 (0:00:00.228) 0:00:14.120 ********* 2025-05-14 02:41:04.142373 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.142379 | orchestrator | 2025-05-14 02:41:04.142385 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-14 02:41:04.142391 | orchestrator | Wednesday 14 May 2025 02:40:50 +0000 (0:00:00.368) 0:00:14.489 ********* 2025-05-14 02:41:04.142397 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:41:04.142402 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:41:04.142408 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:41:04.142414 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.142420 | orchestrator | 2025-05-14 02:41:04.142425 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-14 02:41:04.142431 | orchestrator | Wednesday 14 May 2025 02:40:50 +0000 (0:00:00.482) 0:00:14.971 ********* 2025-05-14 02:41:04.142437 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:41:04.142442 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:41:04.142448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:41:04.142454 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.142460 | orchestrator | 2025-05-14 02:41:04.142473 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-14 02:41:04.142479 | orchestrator | Wednesday 14 May 2025 02:40:51 +0000 (0:00:00.461) 0:00:15.433 ********* 2025-05-14 02:41:04.142486 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:41:04.142491 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-14 02:41:04.142497 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-14 02:41:04.142503 | orchestrator | 2025-05-14 02:41:04.142509 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-14 02:41:04.142515 | orchestrator | Wednesday 14 May 2025 02:40:52 +0000 (0:00:01.165) 0:00:16.598 ********* 2025-05-14 02:41:04.142520 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:41:04.142543 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:41:04.142549 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:41:04.142555 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.142561 | orchestrator | 2025-05-14 02:41:04.142566 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-14 02:41:04.142572 | orchestrator | Wednesday 14 May 2025 02:40:52 +0000 (0:00:00.264) 0:00:16.863 ********* 2025-05-14 02:41:04.142578 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-14 02:41:04.142584 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-14 02:41:04.142590 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-14 02:41:04.142600 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.142606 | orchestrator | 2025-05-14 02:41:04.142612 | orchestrator | TASK [ceph-facts : set_fact 
_current_monitor_address] ************************** 2025-05-14 02:41:04.142617 | orchestrator | Wednesday 14 May 2025 02:40:52 +0000 (0:00:00.212) 0:00:17.076 ********* 2025-05-14 02:41:04.142623 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-14 02:41:04.142629 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-14 02:41:04.142636 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-14 02:41:04.142641 | orchestrator | 2025-05-14 02:41:04.142647 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-14 02:41:04.142653 | orchestrator | Wednesday 14 May 2025 02:40:53 +0000 (0:00:00.214) 0:00:17.290 ********* 2025-05-14 02:41:04.142659 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.142665 | orchestrator | 2025-05-14 02:41:04.142670 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-14 02:41:04.142676 | orchestrator | Wednesday 14 May 2025 02:40:53 +0000 (0:00:00.113) 0:00:17.404 ********* 2025-05-14 02:41:04.142682 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:41:04.142688 | orchestrator | 2025-05-14 02:41:04.142694 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-14 02:41:04.142699 | orchestrator | Wednesday 14 May 2025 02:40:53 +0000 (0:00:00.140) 0:00:17.545 ********* 2025-05-14 02:41:04.142705 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:41:04.142714 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:41:04.142720 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:41:04.142726 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-14 02:41:04.142732 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 02:41:04.142738 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 02:41:04.142743 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 02:41:04.142749 | orchestrator | 2025-05-14 02:41:04.142755 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-14 02:41:04.142761 | orchestrator | Wednesday 14 May 2025 02:40:54 +0000 (0:00:01.214) 0:00:18.759 ********* 2025-05-14 02:41:04.142766 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-14 02:41:04.142772 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-14 02:41:04.142778 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-14 02:41:04.142784 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-14 02:41:04.142789 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-14 02:41:04.142795 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-14 02:41:04.142801 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-14 02:41:04.142806 | 
orchestrator | 2025-05-14 02:41:04.142812 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-05-14 02:41:04.142818 | orchestrator | Wednesday 14 May 2025 02:40:56 +0000 (0:00:01.521) 0:00:20.281 ********* 2025-05-14 02:41:04.142824 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:41:04.142829 | orchestrator | 2025-05-14 02:41:04.142835 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-05-14 02:41:04.142841 | orchestrator | Wednesday 14 May 2025 02:40:56 +0000 (0:00:00.457) 0:00:20.738 ********* 2025-05-14 02:41:04.142847 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:41:04.142857 | orchestrator | 2025-05-14 02:41:04.142863 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-05-14 02:41:04.142869 | orchestrator | Wednesday 14 May 2025 02:40:57 +0000 (0:00:00.548) 0:00:21.287 ********* 2025-05-14 02:41:04.142879 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-05-14 02:41:04.142885 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-05-14 02:41:04.142891 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-05-14 02:41:04.142897 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-05-14 02:41:04.142902 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-05-14 02:41:04.142908 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-05-14 02:41:04.142914 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-05-14 02:41:04.142919 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-05-14 02:41:04.142925 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-05-14 02:41:04.142931 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-05-14 02:41:04.142936 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-05-14 02:41:04.142942 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-05-14 02:41:04.142948 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-05-14 02:41:04.142953 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-05-14 02:41:04.142959 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-05-14 02:41:04.142965 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-05-14 02:41:04.142970 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-05-14 02:41:04.142976 | orchestrator | 2025-05-14 02:41:04.142982 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:41:04.142987 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-14 02:41:04.142994 | orchestrator | 2025-05-14 02:41:04.143000 | orchestrator | 2025-05-14 02:41:04.143006 | orchestrator | 2025-05-14 02:41:04.143011 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-14 02:41:04.143017 | orchestrator | Wednesday 14 May 2025 02:41:03 +0000 (0:00:06.339) 0:00:27.626 ********* 2025-05-14 02:41:04.143022 | orchestrator | =============================================================================== 2025-05-14 02:41:04.143028 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.34s 2025-05-14 02:41:04.143034 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.94s 2025-05-14 02:41:04.143043 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.72s 2025-05-14 02:41:04.143049 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.52s 2025-05-14 02:41:04.143055 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.21s 2025-05-14 02:41:04.143060 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.17s 2025-05-14 02:41:04.143066 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.80s 2025-05-14 02:41:04.143072 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.75s 2025-05-14 02:41:04.143077 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.74s 2025-05-14 02:41:04.143083 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.55s 2025-05-14 02:41:04.143094 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.55s 2025-05-14 02:41:04.143099 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.49s 2025-05-14 02:41:04.143105 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.48s 2025-05-14 02:41:04.143111 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.48s 2025-05-14 02:41:04.143116 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.46s 2025-05-14 02:41:04.143122 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.46s 2025-05-14 02:41:04.143128 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.45s 2025-05-14 02:41:04.143133 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.43s 2025-05-14 02:41:04.143139 | orchestrator | ceph-facts : set osd_pool_default_crush_rule fact ----------------------- 0.37s 2025-05-14 02:41:04.143145 | orchestrator | ceph-facts : resolve bluestore_wal_device link(s) ----------------------- 0.32s 2025-05-14 02:41:04.143151 | orchestrator | 2025-05-14 02:41:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:04.143157 | orchestrator | 2025-05-14 02:41:04 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:41:04.143162 | orchestrator | 2025-05-14 02:41:04 | INFO  | Task 912bed5b-6005-4e67-8f4b-c2e24f926e99 is in state SUCCESS 2025-05-14 02:41:04.143168 | orchestrator | 2025-05-14 02:41:04 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:04.143177 | orchestrator | 2025-05-14 02:41:04 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:04.166991 | 
orchestrator | 2025-05-14 02:41:04 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state STARTED 2025-05-14 02:41:04.167069 | orchestrator | 2025-05-14 02:41:04 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:04.167079 | orchestrator | 2025-05-14 02:41:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:07.177934 | orchestrator | 2025-05-14 02:41:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:07.178942 | orchestrator | 2025-05-14 02:41:07 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state STARTED 2025-05-14 02:41:07.181225 | orchestrator | 2025-05-14 02:41:07 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:07.183870 | orchestrator | 2025-05-14 02:41:07 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:07.186253 | orchestrator | 2025-05-14 02:41:07 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state STARTED 2025-05-14 02:41:07.190169 | orchestrator | 2025-05-14 02:41:07 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:07.190221 | orchestrator | 2025-05-14 02:41:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:10.233147 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:10.233676 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task 921372fe-118d-4803-ba50-dc7da3dd02a8 is in state SUCCESS 2025-05-14 02:41:10.235632 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:10.237399 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:10.239382 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:10.240811 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state STARTED 2025-05-14 02:41:10.242450 | orchestrator | 2025-05-14 02:41:10 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:10.242631 | orchestrator | 2025-05-14 02:41:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:13.284511 | orchestrator | 2025-05-14 02:41:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:13.287881 | orchestrator | 2025-05-14 02:41:13 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:13.290412 | orchestrator | 2025-05-14 02:41:13 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:13.291440 | orchestrator | 2025-05-14 02:41:13 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:13.293272 | orchestrator | 2025-05-14 02:41:13 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state STARTED 2025-05-14 02:41:13.294749 | orchestrator | 2025-05-14 02:41:13 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:13.294794 | orchestrator | 2025-05-14 02:41:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:16.325752 | orchestrator | 2025-05-14 02:41:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:16.327869 | orchestrator | 2025-05-14 02:41:16 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 
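Editor's note on the "ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/" task above: it pulls the listed keyrings from testbed-node-0 to the manager side. A minimal sketch of such a step, assuming ansible.builtin.fetch and the file list shown in the log (the real ceph-fetch-keys task may use a different module or options):

    - name: Copy ceph user and bootstrap keys to the ansible server (sketch)
      ansible.builtin.fetch:
        src: "{{ item }}"
        dest: /share/11111111-1111-1111-1111-111111111111/   # destination directory named in the task title above
        flat: true                                           # assumed: store plain filenames without per-host subdirectories
      loop:
        - /etc/ceph/ceph.client.admin.keyring
        - /etc/ceph/ceph.client.cinder.keyring
        - /var/lib/ceph/bootstrap-osd/ceph.keyring
        # ... remaining client, mgr, mon and bootstrap keyrings as listed in the log
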
2025-05-14 02:41:16.328996 | orchestrator | 2025-05-14 02:41:16 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:16.330370 | orchestrator | 2025-05-14 02:41:16 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:16.332935 | orchestrator | 2025-05-14 02:41:16 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state STARTED 2025-05-14 02:41:16.335692 | orchestrator | 2025-05-14 02:41:16 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:16.335752 | orchestrator | 2025-05-14 02:41:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:19.372694 | orchestrator | 2025-05-14 02:41:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:19.375266 | orchestrator | 2025-05-14 02:41:19 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:19.376340 | orchestrator | 2025-05-14 02:41:19 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:19.377350 | orchestrator | 2025-05-14 02:41:19 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:19.378806 | orchestrator | 2025-05-14 02:41:19 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state STARTED 2025-05-14 02:41:19.379148 | orchestrator | 2025-05-14 02:41:19 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:19.379202 | orchestrator | 2025-05-14 02:41:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:22.415885 | orchestrator | 2025-05-14 02:41:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:22.416443 | orchestrator | 2025-05-14 02:41:22 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:22.417241 | orchestrator | 2025-05-14 02:41:22 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:22.418152 | orchestrator | 2025-05-14 02:41:22 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:22.419049 | orchestrator | 2025-05-14 02:41:22 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state STARTED 2025-05-14 02:41:22.420011 | orchestrator | 2025-05-14 02:41:22 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:22.420044 | orchestrator | 2025-05-14 02:41:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:25.458804 | orchestrator | 2025-05-14 02:41:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:25.461022 | orchestrator | 2025-05-14 02:41:25 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:25.462690 | orchestrator | 2025-05-14 02:41:25 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:25.465021 | orchestrator | 2025-05-14 02:41:25 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:25.467377 | orchestrator | 2025-05-14 02:41:25 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state STARTED 2025-05-14 02:41:25.469036 | orchestrator | 2025-05-14 02:41:25 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:25.469476 | orchestrator | 2025-05-14 02:41:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:28.504427 | orchestrator | 2025-05-14 02:41:28 | INFO  | Task 
d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:28.505826 | orchestrator | 2025-05-14 02:41:28 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:28.507442 | orchestrator | 2025-05-14 02:41:28 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:28.509075 | orchestrator | 2025-05-14 02:41:28 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:28.510122 | orchestrator | 2025-05-14 02:41:28 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state STARTED 2025-05-14 02:41:28.511826 | orchestrator | 2025-05-14 02:41:28 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:28.511861 | orchestrator | 2025-05-14 02:41:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:31.548146 | orchestrator | 2025-05-14 02:41:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:31.552824 | orchestrator | 2025-05-14 02:41:31 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:31.555006 | orchestrator | 2025-05-14 02:41:31 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:31.558145 | orchestrator | 2025-05-14 02:41:31 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:31.560170 | orchestrator | 2025-05-14 02:41:31 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state STARTED 2025-05-14 02:41:31.562185 | orchestrator | 2025-05-14 02:41:31 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:31.562463 | orchestrator | 2025-05-14 02:41:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:34.616998 | orchestrator | 2025-05-14 02:41:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:34.617218 | orchestrator | 2025-05-14 02:41:34 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:34.618779 | orchestrator | 2025-05-14 02:41:34 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:34.620628 | orchestrator | 2025-05-14 02:41:34 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:34.621394 | orchestrator | 2025-05-14 02:41:34 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state STARTED 2025-05-14 02:41:34.622418 | orchestrator | 2025-05-14 02:41:34 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:34.622484 | orchestrator | 2025-05-14 02:41:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:37.663214 | orchestrator | 2025-05-14 02:41:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:37.663977 | orchestrator | 2025-05-14 02:41:37 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:37.667855 | orchestrator | 2025-05-14 02:41:37 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:37.667890 | orchestrator | 2025-05-14 02:41:37 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:37.667903 | orchestrator | 2025-05-14 02:41:37 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state STARTED 2025-05-14 02:41:37.668818 | orchestrator | 2025-05-14 02:41:37 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:37.668840 | 
orchestrator | 2025-05-14 02:41:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:40.726079 | orchestrator | 2025-05-14 02:41:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:40.727487 | orchestrator | 2025-05-14 02:41:40 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:40.730882 | orchestrator | 2025-05-14 02:41:40 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:40.732869 | orchestrator | 2025-05-14 02:41:40 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:41:40.735254 | orchestrator | 2025-05-14 02:41:40 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:40.736500 | orchestrator | 2025-05-14 02:41:40.736586 | orchestrator | 2025-05-14 02:41:40.736607 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-05-14 02:41:40.736627 | orchestrator | 2025-05-14 02:41:40.736645 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-05-14 02:41:40.736664 | orchestrator | Wednesday 14 May 2025 02:40:26 +0000 (0:00:00.140) 0:00:00.140 ********* 2025-05-14 02:41:40.736710 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-14 02:41:40.736730 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 02:41:40.736749 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 02:41:40.736767 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 02:41:40.736784 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 02:41:40.736803 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-14 02:41:40.736822 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-14 02:41:40.736840 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-14 02:41:40.736859 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-14 02:41:40.736877 | orchestrator | 2025-05-14 02:41:40.736895 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-05-14 02:41:40.736913 | orchestrator | Wednesday 14 May 2025 02:40:30 +0000 (0:00:03.014) 0:00:03.154 ********* 2025-05-14 02:41:40.736931 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-14 02:41:40.736949 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 02:41:40.736967 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 02:41:40.737024 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 02:41:40.737045 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-14 02:41:40.737063 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-14 02:41:40.737081 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-14 02:41:40.737101 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-14 
02:41:40.737120 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-14 02:41:40.737139 | orchestrator | 2025-05-14 02:41:40.737159 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-05-14 02:41:40.737176 | orchestrator | Wednesday 14 May 2025 02:40:30 +0000 (0:00:00.235) 0:00:03.390 ********* 2025-05-14 02:41:40.737194 | orchestrator | ok: [testbed-manager] => { 2025-05-14 02:41:40.737216 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 2025-05-14 02:41:40.737237 | orchestrator | } 2025-05-14 02:41:40.737257 | orchestrator | 2025-05-14 02:41:40.737274 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-05-14 02:41:40.737293 | orchestrator | Wednesday 14 May 2025 02:40:30 +0000 (0:00:00.159) 0:00:03.550 ********* 2025-05-14 02:41:40.737310 | orchestrator | changed: [testbed-manager] 2025-05-14 02:41:40.737329 | orchestrator | 2025-05-14 02:41:40.737340 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-05-14 02:41:40.737351 | orchestrator | Wednesday 14 May 2025 02:41:04 +0000 (0:00:33.868) 0:00:37.419 ********* 2025-05-14 02:41:40.737363 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-05-14 02:41:40.737374 | orchestrator | 2025-05-14 02:41:40.737384 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-05-14 02:41:40.737395 | orchestrator | Wednesday 14 May 2025 02:41:04 +0000 (0:00:00.523) 0:00:37.942 ********* 2025-05-14 02:41:40.737407 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-05-14 02:41:40.737419 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-05-14 02:41:40.737430 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-05-14 02:41:40.737448 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-05-14 02:41:40.737460 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-05-14 02:41:40.737488 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-05-14 02:41:40.737991 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-05-14 02:41:40.738167 | orchestrator | changed: [testbed-manager] => 
(item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-05-14 02:41:40.738213 | orchestrator | 2025-05-14 02:41:40.738229 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-05-14 02:41:40.738243 | orchestrator | Wednesday 14 May 2025 02:41:07 +0000 (0:00:02.809) 0:00:40.752 ********* 2025-05-14 02:41:40.738255 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:41:40.738267 | orchestrator | 2025-05-14 02:41:40.738278 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:41:40.738291 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:41:40.738303 | orchestrator | 2025-05-14 02:41:40.738314 | orchestrator | Wednesday 14 May 2025 02:41:07 +0000 (0:00:00.040) 0:00:40.792 ********* 2025-05-14 02:41:40.738325 | orchestrator | =============================================================================== 2025-05-14 02:41:40.738335 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 33.87s 2025-05-14 02:41:40.738346 | orchestrator | Check ceph keys --------------------------------------------------------- 3.01s 2025-05-14 02:41:40.738356 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.81s 2025-05-14 02:41:40.738367 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.52s 2025-05-14 02:41:40.738377 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.24s 2025-05-14 02:41:40.738388 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.16s 2025-05-14 02:41:40.738400 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.04s 2025-05-14 02:41:40.738415 | orchestrator | 2025-05-14 02:41:40.738434 | orchestrator | 2025-05-14 02:41:40 | INFO  | Task 33019746-b9ef-495b-9f80-b37c5852f4dc is in state SUCCESS 2025-05-14 02:41:40.738660 | orchestrator | 2025-05-14 02:41:40 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:40.738676 | orchestrator | 2025-05-14 02:41:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:43.771290 | orchestrator | 2025-05-14 02:41:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:43.771385 | orchestrator | 2025-05-14 02:41:43 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:43.771401 | orchestrator | 2025-05-14 02:41:43 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:43.771413 | orchestrator | 2025-05-14 02:41:43 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:41:43.771424 | orchestrator | 2025-05-14 02:41:43 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:43.771436 | orchestrator | 2025-05-14 02:41:43 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:43.771447 | orchestrator | 2025-05-14 02:41:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:46.821623 | orchestrator | 2025-05-14 02:41:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:46.821735 | orchestrator | 2025-05-14 02:41:46 | INFO  | Task 
709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:46.822476 | orchestrator | 2025-05-14 02:41:46 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:46.823306 | orchestrator | 2025-05-14 02:41:46 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:41:46.824195 | orchestrator | 2025-05-14 02:41:46 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:46.824910 | orchestrator | 2025-05-14 02:41:46 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:46.825063 | orchestrator | 2025-05-14 02:41:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:49.864984 | orchestrator | 2025-05-14 02:41:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:49.865708 | orchestrator | 2025-05-14 02:41:49 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:49.868375 | orchestrator | 2025-05-14 02:41:49 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:49.868751 | orchestrator | 2025-05-14 02:41:49 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:41:49.869237 | orchestrator | 2025-05-14 02:41:49 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:49.869798 | orchestrator | 2025-05-14 02:41:49 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:49.869828 | orchestrator | 2025-05-14 02:41:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:52.903959 | orchestrator | 2025-05-14 02:41:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:52.904064 | orchestrator | 2025-05-14 02:41:52 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:52.904382 | orchestrator | 2025-05-14 02:41:52 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:52.905070 | orchestrator | 2025-05-14 02:41:52 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:41:52.905501 | orchestrator | 2025-05-14 02:41:52 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:52.906088 | orchestrator | 2025-05-14 02:41:52 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:52.906111 | orchestrator | 2025-05-14 02:41:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:55.958261 | orchestrator | 2025-05-14 02:41:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:55.958382 | orchestrator | 2025-05-14 02:41:55 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:55.958407 | orchestrator | 2025-05-14 02:41:55 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:55.967219 | orchestrator | 2025-05-14 02:41:55 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:41:55.967322 | orchestrator | 2025-05-14 02:41:55 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:55.967345 | orchestrator | 2025-05-14 02:41:55 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:55.967364 | orchestrator | 2025-05-14 02:41:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:41:59.001495 | orchestrator | 2025-05-14 
02:41:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:41:59.001682 | orchestrator | 2025-05-14 02:41:59 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:41:59.004942 | orchestrator | 2025-05-14 02:41:59 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:41:59.005056 | orchestrator | 2025-05-14 02:41:59 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:41:59.006823 | orchestrator | 2025-05-14 02:41:59 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:41:59.009370 | orchestrator | 2025-05-14 02:41:59 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:41:59.009679 | orchestrator | 2025-05-14 02:41:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:02.055255 | orchestrator | 2025-05-14 02:42:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:02.055339 | orchestrator | 2025-05-14 02:42:02 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:02.057956 | orchestrator | 2025-05-14 02:42:02 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:02.058076 | orchestrator | 2025-05-14 02:42:02 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:02.059836 | orchestrator | 2025-05-14 02:42:02 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state STARTED 2025-05-14 02:42:02.059871 | orchestrator | 2025-05-14 02:42:02 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:02.059881 | orchestrator | 2025-05-14 02:42:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:05.094098 | orchestrator | 2025-05-14 02:42:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:05.094205 | orchestrator | 2025-05-14 02:42:05 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:05.094676 | orchestrator | 2025-05-14 02:42:05 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:05.096439 | orchestrator | 2025-05-14 02:42:05 | INFO  | Task 4c9bf474-a068-4633-885e-7ffc376d660a is in state STARTED 2025-05-14 02:42:05.096951 | orchestrator | 2025-05-14 02:42:05 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:05.097796 | orchestrator | 2025-05-14 02:42:05 | INFO  | Task 3ba06c69-1da8-4d1b-ac2c-fe16828c7e54 is in state SUCCESS 2025-05-14 02:42:05.098454 | orchestrator | 2025-05-14 02:42:05 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:05.098484 | orchestrator | 2025-05-14 02:42:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:08.132168 | orchestrator | 2025-05-14 02:42:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:08.132268 | orchestrator | 2025-05-14 02:42:08 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:08.132670 | orchestrator | 2025-05-14 02:42:08 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:08.133328 | orchestrator | 2025-05-14 02:42:08 | INFO  | Task 4c9bf474-a068-4633-885e-7ffc376d660a is in state STARTED 2025-05-14 02:42:08.133995 | orchestrator | 2025-05-14 02:42:08 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 
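The Ceph key handling summarized in the recap above follows a simple fetch-then-copy pattern: keyrings are pulled from the first monitor node and then written into the configuration repository (for example the ceph.client.manila.keyring overlay destination shown earlier). A minimal, hypothetical Ansible sketch of that pattern, not the exact OSISM task list; everything except the overlay destination path is assumed for illustration:

- name: Fetch ceph keys from the first monitor node (illustrative)
  hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Pull the manila keyring back to the control node
      ansible.builtin.fetch:
        src: /etc/ceph/ceph.client.manila.keyring      # assumed location on the monitor
        dest: /tmp/fetched-ceph-keys/
        flat: true

- name: Copy ceph keys to the configuration repository (illustrative)
  hosts: testbed-manager
  gather_facts: false
  tasks:
    - name: Place the keyring under the kolla overlay used above
      ansible.builtin.copy:
        src: /tmp/fetched-ceph-keys/ceph.client.manila.keyring
        dest: /opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring
        mode: "0600"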
2025-05-14 02:42:08.134463 | orchestrator | 2025-05-14 02:42:08 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:08.134590 | orchestrator | 2025-05-14 02:42:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:11.167243 | orchestrator | 2025-05-14 02:42:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:11.167351 | orchestrator | 2025-05-14 02:42:11 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:11.167916 | orchestrator | 2025-05-14 02:42:11 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:11.169809 | orchestrator | 2025-05-14 02:42:11 | INFO  | Task 4c9bf474-a068-4633-885e-7ffc376d660a is in state STARTED 2025-05-14 02:42:11.170148 | orchestrator | 2025-05-14 02:42:11 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:11.173626 | orchestrator | 2025-05-14 02:42:11 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:11.173730 | orchestrator | 2025-05-14 02:42:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:14.217854 | orchestrator | 2025-05-14 02:42:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:14.217980 | orchestrator | 2025-05-14 02:42:14 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:14.218817 | orchestrator | 2025-05-14 02:42:14 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:14.219445 | orchestrator | 2025-05-14 02:42:14 | INFO  | Task 4c9bf474-a068-4633-885e-7ffc376d660a is in state STARTED 2025-05-14 02:42:14.220790 | orchestrator | 2025-05-14 02:42:14 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:14.221601 | orchestrator | 2025-05-14 02:42:14 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:14.221630 | orchestrator | 2025-05-14 02:42:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:17.251658 | orchestrator | 2025-05-14 02:42:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:17.251797 | orchestrator | 2025-05-14 02:42:17 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:17.251826 | orchestrator | 2025-05-14 02:42:17 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:17.252321 | orchestrator | 2025-05-14 02:42:17 | INFO  | Task 4c9bf474-a068-4633-885e-7ffc376d660a is in state STARTED 2025-05-14 02:42:17.253107 | orchestrator | 2025-05-14 02:42:17 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:17.253657 | orchestrator | 2025-05-14 02:42:17 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:17.253688 | orchestrator | 2025-05-14 02:42:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:20.291451 | orchestrator | 2025-05-14 02:42:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:20.292764 | orchestrator | 2025-05-14 02:42:20 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:20.295296 | orchestrator | 2025-05-14 02:42:20 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:20.297416 | orchestrator | 2025-05-14 02:42:20 | INFO  | Task 
4c9bf474-a068-4633-885e-7ffc376d660a is in state STARTED 2025-05-14 02:42:20.298747 | orchestrator | 2025-05-14 02:42:20 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:20.301731 | orchestrator | 2025-05-14 02:42:20 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:20.301773 | orchestrator | 2025-05-14 02:42:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:23.331937 | orchestrator | 2025-05-14 02:42:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:23.332192 | orchestrator | 2025-05-14 02:42:23 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:23.333085 | orchestrator | 2025-05-14 02:42:23 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:23.333775 | orchestrator | 2025-05-14 02:42:23 | INFO  | Task 4c9bf474-a068-4633-885e-7ffc376d660a is in state STARTED 2025-05-14 02:42:23.334528 | orchestrator | 2025-05-14 02:42:23 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:23.335200 | orchestrator | 2025-05-14 02:42:23 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:23.335267 | orchestrator | 2025-05-14 02:42:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:26.371875 | orchestrator | 2025-05-14 02:42:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:26.372068 | orchestrator | 2025-05-14 02:42:26 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:26.373615 | orchestrator | 2025-05-14 02:42:26 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:26.374664 | orchestrator | 2025-05-14 02:42:26 | INFO  | Task 4c9bf474-a068-4633-885e-7ffc376d660a is in state STARTED 2025-05-14 02:42:26.380033 | orchestrator | 2025-05-14 02:42:26 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:26.381708 | orchestrator | 2025-05-14 02:42:26 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:26.381761 | orchestrator | 2025-05-14 02:42:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:29.414362 | orchestrator | 2025-05-14 02:42:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:29.417801 | orchestrator | 2025-05-14 02:42:29 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:29.418475 | orchestrator | 2025-05-14 02:42:29 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:29.420042 | orchestrator | 2025-05-14 02:42:29 | INFO  | Task 4c9bf474-a068-4633-885e-7ffc376d660a is in state STARTED 2025-05-14 02:42:29.422012 | orchestrator | 2025-05-14 02:42:29 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:29.422649 | orchestrator | 2025-05-14 02:42:29 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:29.422687 | orchestrator | 2025-05-14 02:42:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:32.445948 | orchestrator | 2025-05-14 02:42:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:32.446096 | orchestrator | 2025-05-14 02:42:32 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:32.446399 | orchestrator | 2025-05-14 
02:42:32 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:32.448382 | orchestrator | 2025-05-14 02:42:32 | INFO  | Task 4c9bf474-a068-4633-885e-7ffc376d660a is in state STARTED 2025-05-14 02:42:32.449336 | orchestrator | 2025-05-14 02:42:32 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:32.450721 | orchestrator | 2025-05-14 02:42:32 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:32.450757 | orchestrator | 2025-05-14 02:42:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:35.486209 | orchestrator | 2025-05-14 02:42:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:35.486328 | orchestrator | 2025-05-14 02:42:35 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:35.487010 | orchestrator | 2025-05-14 02:42:35 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:35.487719 | orchestrator | 2025-05-14 02:42:35 | INFO  | Task 4c9bf474-a068-4633-885e-7ffc376d660a is in state STARTED 2025-05-14 02:42:35.488341 | orchestrator | 2025-05-14 02:42:35 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:35.488956 | orchestrator | 2025-05-14 02:42:35 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:35.489002 | orchestrator | 2025-05-14 02:42:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:38.513273 | orchestrator | 2025-05-14 02:42:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:38.514894 | orchestrator | 2025-05-14 02:42:38 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:38.515436 | orchestrator | 2025-05-14 02:42:38 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:38.516202 | orchestrator | 2025-05-14 02:42:38 | INFO  | Task 4c9bf474-a068-4633-885e-7ffc376d660a is in state STARTED 2025-05-14 02:42:38.516640 | orchestrator | 2025-05-14 02:42:38 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:38.517078 | orchestrator | 2025-05-14 02:42:38 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:38.517170 | orchestrator | 2025-05-14 02:42:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:41.535821 | orchestrator | 2025-05-14 02:42:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:41.535932 | orchestrator | 2025-05-14 02:42:41 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:41.536337 | orchestrator | 2025-05-14 02:42:41 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:41.536765 | orchestrator | 2025-05-14 02:42:41 | INFO  | Task 4c9bf474-a068-4633-885e-7ffc376d660a is in state SUCCESS 2025-05-14 02:42:41.537182 | orchestrator | 2025-05-14 02:42:41.537208 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-05-14 02:42:41.537219 | orchestrator | 2025-05-14 02:42:41.537231 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-05-14 02:42:41.537242 | orchestrator | Wednesday 14 May 2025 02:41:04 +0000 (0:00:00.195) 0:00:00.195 ********* 2025-05-14 02:42:41.537254 | orchestrator | changed: [localhost] 2025-05-14 02:42:41.537265 | 
orchestrator | 2025-05-14 02:42:41.537276 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-05-14 02:42:41.537287 | orchestrator | Wednesday 14 May 2025 02:41:04 +0000 (0:00:00.744) 0:00:00.940 ********* 2025-05-14 02:42:41.537298 | orchestrator | changed: [localhost] 2025-05-14 02:42:41.537309 | orchestrator | 2025-05-14 02:42:41.537319 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-05-14 02:42:41.537330 | orchestrator | Wednesday 14 May 2025 02:41:34 +0000 (0:00:29.532) 0:00:30.472 ********* 2025-05-14 02:42:41.537341 | orchestrator | changed: [localhost] 2025-05-14 02:42:41.537351 | orchestrator | 2025-05-14 02:42:41.537362 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:42:41.537374 | orchestrator | 2025-05-14 02:42:41.537385 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:42:41.537396 | orchestrator | Wednesday 14 May 2025 02:41:38 +0000 (0:00:03.742) 0:00:34.215 ********* 2025-05-14 02:42:41.537407 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:42:41.537418 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:42:41.537428 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:42:41.537439 | orchestrator | 2025-05-14 02:42:41.537450 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:42:41.537461 | orchestrator | Wednesday 14 May 2025 02:41:38 +0000 (0:00:00.341) 0:00:34.556 ********* 2025-05-14 02:42:41.537471 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-05-14 02:42:41.537514 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-05-14 02:42:41.537529 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-05-14 02:42:41.537545 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-05-14 02:42:41.537600 | orchestrator | 2025-05-14 02:42:41.537620 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-05-14 02:42:41.537639 | orchestrator | skipping: no hosts matched 2025-05-14 02:42:41.537658 | orchestrator | 2025-05-14 02:42:41.537777 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:42:41.537794 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:42:41.537807 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:42:41.537820 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:42:41.537831 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:42:41.537842 | orchestrator | 2025-05-14 02:42:41.537854 | orchestrator | 2025-05-14 02:42:41.537865 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:42:41.537876 | orchestrator | Wednesday 14 May 2025 02:41:38 +0000 (0:00:00.348) 0:00:34.905 ********* 2025-05-14 02:42:41.537888 | orchestrator | =============================================================================== 2025-05-14 02:42:41.537899 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 29.53s 2025-05-14 02:42:41.537910 | 
orchestrator | Download ironic-agent kernel -------------------------------------------- 3.74s 2025-05-14 02:42:41.537937 | orchestrator | Ensure the destination directory exists --------------------------------- 0.74s 2025-05-14 02:42:41.537949 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2025-05-14 02:42:41.537960 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-05-14 02:42:41.537971 | orchestrator | 2025-05-14 02:42:41.537982 | orchestrator | 2025-05-14 02:42:41.537993 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-14 02:42:41.538004 | orchestrator | 2025-05-14 02:42:41.538068 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-14 02:42:41.538082 | orchestrator | Wednesday 14 May 2025 02:41:11 +0000 (0:00:00.166) 0:00:00.166 ********* 2025-05-14 02:42:41.538093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-14 02:42:41.538103 | orchestrator | 2025-05-14 02:42:41.538114 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-14 02:42:41.538124 | orchestrator | Wednesday 14 May 2025 02:41:11 +0000 (0:00:00.219) 0:00:00.385 ********* 2025-05-14 02:42:41.538135 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-14 02:42:41.538146 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-14 02:42:41.538234 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-14 02:42:41.538248 | orchestrator | 2025-05-14 02:42:41.538258 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-14 02:42:41.538269 | orchestrator | Wednesday 14 May 2025 02:41:12 +0000 (0:00:01.082) 0:00:01.468 ********* 2025-05-14 02:42:41.538280 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-14 02:42:41.538359 | orchestrator | 2025-05-14 02:42:41.538371 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-14 02:42:41.538381 | orchestrator | Wednesday 14 May 2025 02:41:13 +0000 (0:00:01.076) 0:00:02.544 ********* 2025-05-14 02:42:41.538404 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:41.538415 | orchestrator | 2025-05-14 02:42:41.538426 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-14 02:42:41.538437 | orchestrator | Wednesday 14 May 2025 02:41:14 +0000 (0:00:00.852) 0:00:03.397 ********* 2025-05-14 02:42:41.538448 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:41.538470 | orchestrator | 2025-05-14 02:42:41.538481 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-14 02:42:41.538519 | orchestrator | Wednesday 14 May 2025 02:41:15 +0000 (0:00:00.874) 0:00:04.272 ********* 2025-05-14 02:42:41.538530 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
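The "Manage cephclient service" step above is retried until the service comes up (the log shows ten retries available and roughly 38 seconds spent on this task). The retry itself is the standard Ansible until/retries pattern; a minimal sketch under the assumption that the role drives a docker compose project in /opt/cephclient, which may differ from the real osism.services.cephclient implementation:

- name: Manage cephclient service (illustrative retry pattern)
  hosts: testbed-manager
  gather_facts: false
  tasks:
    - name: Bring the compose project up and retry until it succeeds
      ansible.builtin.command:
        cmd: docker compose up -d
        chdir: /opt/cephclient            # directory created earlier by the role
      register: cephclient_up
      retries: 10                         # matches the "10 retries left" message above
      delay: 5
      until: cephclient_up.rc == 0
      changed_when: false                 # illustrative only; the real role reports state properly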
2025-05-14 02:42:41.538548 | orchestrator | ok: [testbed-manager] 2025-05-14 02:42:41.538567 | orchestrator | 2025-05-14 02:42:41.538585 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-14 02:42:41.538603 | orchestrator | Wednesday 14 May 2025 02:41:53 +0000 (0:00:38.439) 0:00:42.711 ********* 2025-05-14 02:42:41.538620 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-14 02:42:41.538640 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-14 02:42:41.538659 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-14 02:42:41.538677 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-14 02:42:41.538693 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-14 02:42:41.538704 | orchestrator | 2025-05-14 02:42:41.538714 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-14 02:42:41.538725 | orchestrator | Wednesday 14 May 2025 02:41:57 +0000 (0:00:03.515) 0:00:46.227 ********* 2025-05-14 02:42:41.538735 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-14 02:42:41.538746 | orchestrator | 2025-05-14 02:42:41.538756 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-14 02:42:41.538767 | orchestrator | Wednesday 14 May 2025 02:41:57 +0000 (0:00:00.357) 0:00:46.584 ********* 2025-05-14 02:42:41.538778 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:42:41.538788 | orchestrator | 2025-05-14 02:42:41.538799 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-14 02:42:41.538809 | orchestrator | Wednesday 14 May 2025 02:41:57 +0000 (0:00:00.103) 0:00:46.688 ********* 2025-05-14 02:42:41.538820 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:42:41.538830 | orchestrator | 2025-05-14 02:42:41.538841 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-14 02:42:41.538852 | orchestrator | Wednesday 14 May 2025 02:41:57 +0000 (0:00:00.258) 0:00:46.947 ********* 2025-05-14 02:42:41.538862 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:41.538873 | orchestrator | 2025-05-14 02:42:41.538884 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-14 02:42:41.538894 | orchestrator | Wednesday 14 May 2025 02:41:59 +0000 (0:00:01.253) 0:00:48.201 ********* 2025-05-14 02:42:41.538905 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:41.538915 | orchestrator | 2025-05-14 02:42:41.538926 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-14 02:42:41.538937 | orchestrator | Wednesday 14 May 2025 02:42:00 +0000 (0:00:00.942) 0:00:49.143 ********* 2025-05-14 02:42:41.538947 | orchestrator | changed: [testbed-manager] 2025-05-14 02:42:41.538958 | orchestrator | 2025-05-14 02:42:41.538968 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-14 02:42:41.538979 | orchestrator | Wednesday 14 May 2025 02:42:00 +0000 (0:00:00.672) 0:00:49.815 ********* 2025-05-14 02:42:41.538991 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-14 02:42:41.539004 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-14 02:42:41.539016 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-14 02:42:41.539029 | orchestrator | ok: 
[testbed-manager] => (item=rbd) 2025-05-14 02:42:41.539041 | orchestrator | 2025-05-14 02:42:41.539053 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:42:41.539074 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-14 02:42:41.539087 | orchestrator | 2025-05-14 02:42:41.539100 | orchestrator | Wednesday 14 May 2025 02:42:02 +0000 (0:00:01.569) 0:00:51.385 ********* 2025-05-14 02:42:41.539123 | orchestrator | =============================================================================== 2025-05-14 02:42:41.539136 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.44s 2025-05-14 02:42:41.539149 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.52s 2025-05-14 02:42:41.539161 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.57s 2025-05-14 02:42:41.539174 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.25s 2025-05-14 02:42:41.539187 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.08s 2025-05-14 02:42:41.539199 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.08s 2025-05-14 02:42:41.539211 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.94s 2025-05-14 02:42:41.539223 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.87s 2025-05-14 02:42:41.539235 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.85s 2025-05-14 02:42:41.539248 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.67s 2025-05-14 02:42:41.539260 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.36s 2025-05-14 02:42:41.539273 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.26s 2025-05-14 02:42:41.539284 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2025-05-14 02:42:41.539296 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.10s 2025-05-14 02:42:41.539308 | orchestrator | 2025-05-14 02:42:41.539331 | orchestrator | 2025-05-14 02:42:41 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:41.539345 | orchestrator | 2025-05-14 02:42:41 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:41.539356 | orchestrator | 2025-05-14 02:42:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:44.559634 | orchestrator | 2025-05-14 02:42:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:44.560082 | orchestrator | 2025-05-14 02:42:44 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:44.560752 | orchestrator | 2025-05-14 02:42:44 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:44.561979 | orchestrator | 2025-05-14 02:42:44 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:44.567305 | orchestrator | 2025-05-14 02:42:44 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:44.567380 | orchestrator | 2025-05-14 02:42:44 | INFO  | Wait 1 second(s) 
until the next check 2025-05-14 02:42:47.596857 | orchestrator | 2025-05-14 02:42:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:47.596966 | orchestrator | 2025-05-14 02:42:47 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:47.597794 | orchestrator | 2025-05-14 02:42:47 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:47.597961 | orchestrator | 2025-05-14 02:42:47 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:47.598979 | orchestrator | 2025-05-14 02:42:47 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:47.599091 | orchestrator | 2025-05-14 02:42:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:50.635829 | orchestrator | 2025-05-14 02:42:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:50.636782 | orchestrator | 2025-05-14 02:42:50 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:50.638616 | orchestrator | 2025-05-14 02:42:50 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:50.639381 | orchestrator | 2025-05-14 02:42:50 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:50.641250 | orchestrator | 2025-05-14 02:42:50 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:50.641310 | orchestrator | 2025-05-14 02:42:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:53.680738 | orchestrator | 2025-05-14 02:42:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:53.682755 | orchestrator | 2025-05-14 02:42:53 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:53.682843 | orchestrator | 2025-05-14 02:42:53 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:53.682865 | orchestrator | 2025-05-14 02:42:53 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:53.682895 | orchestrator | 2025-05-14 02:42:53 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:53.682912 | orchestrator | 2025-05-14 02:42:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:56.719590 | orchestrator | 2025-05-14 02:42:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:56.720113 | orchestrator | 2025-05-14 02:42:56 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:56.721346 | orchestrator | 2025-05-14 02:42:56 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:56.722642 | orchestrator | 2025-05-14 02:42:56 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:56.725120 | orchestrator | 2025-05-14 02:42:56 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:56.725205 | orchestrator | 2025-05-14 02:42:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:42:59.767933 | orchestrator | 2025-05-14 02:42:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:42:59.769873 | orchestrator | 2025-05-14 02:42:59 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:42:59.769917 | orchestrator | 2025-05-14 02:42:59 | INFO  | Task 
57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:42:59.770567 | orchestrator | 2025-05-14 02:42:59 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:42:59.771037 | orchestrator | 2025-05-14 02:42:59 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:42:59.772173 | orchestrator | 2025-05-14 02:42:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:02.799156 | orchestrator | 2025-05-14 02:43:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:02.799363 | orchestrator | 2025-05-14 02:43:02 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:02.800267 | orchestrator | 2025-05-14 02:43:02 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:43:02.800829 | orchestrator | 2025-05-14 02:43:02 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:43:02.801772 | orchestrator | 2025-05-14 02:43:02 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:02.801802 | orchestrator | 2025-05-14 02:43:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:05.838091 | orchestrator | 2025-05-14 02:43:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:05.841406 | orchestrator | 2025-05-14 02:43:05 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:05.842391 | orchestrator | 2025-05-14 02:43:05 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:43:05.843088 | orchestrator | 2025-05-14 02:43:05 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state STARTED 2025-05-14 02:43:05.845168 | orchestrator | 2025-05-14 02:43:05 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:05.845257 | orchestrator | 2025-05-14 02:43:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:08.886662 | orchestrator | 2025-05-14 02:43:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:08.887061 | orchestrator | 2025-05-14 02:43:08 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:08.888002 | orchestrator | 2025-05-14 02:43:08 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:43:08.889024 | orchestrator | 2025-05-14 02:43:08 | INFO  | Task 43992d32-c4ee-421a-8784-d3f56ded7c6b is in state SUCCESS 2025-05-14 02:43:08.889457 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-14 02:43:08.889493 | orchestrator | 2025-05-14 02:43:08.889505 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-05-14 02:43:08.889515 | orchestrator | 2025-05-14 02:43:08.889521 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-14 02:43:08.889528 | orchestrator | Wednesday 14 May 2025 02:42:06 +0000 (0:00:00.330) 0:00:00.330 ********* 2025-05-14 02:43:08.889549 | orchestrator | changed: [testbed-manager] 2025-05-14 02:43:08.889558 | orchestrator | 2025-05-14 02:43:08.889568 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-05-14 02:43:08.889577 | orchestrator | Wednesday 14 May 2025 02:42:07 +0000 (0:00:01.093) 0:00:01.424 ********* 2025-05-14 02:43:08.889585 | orchestrator | changed: 
[testbed-manager] 2025-05-14 02:43:08.889594 | orchestrator | 2025-05-14 02:43:08.889603 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-05-14 02:43:08.889613 | orchestrator | Wednesday 14 May 2025 02:42:08 +0000 (0:00:01.069) 0:00:02.494 ********* 2025-05-14 02:43:08.889622 | orchestrator | changed: [testbed-manager] 2025-05-14 02:43:08.889632 | orchestrator | 2025-05-14 02:43:08.889686 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-14 02:43:08.889693 | orchestrator | Wednesday 14 May 2025 02:42:09 +0000 (0:00:00.896) 0:00:03.390 ********* 2025-05-14 02:43:08.889699 | orchestrator | changed: [testbed-manager] 2025-05-14 02:43:08.889705 | orchestrator | 2025-05-14 02:43:08.889710 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-14 02:43:08.889716 | orchestrator | Wednesday 14 May 2025 02:42:10 +0000 (0:00:00.908) 0:00:04.298 ********* 2025-05-14 02:43:08.889722 | orchestrator | changed: [testbed-manager] 2025-05-14 02:43:08.889728 | orchestrator | 2025-05-14 02:43:08.889734 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-14 02:43:08.889740 | orchestrator | Wednesday 14 May 2025 02:42:11 +0000 (0:00:01.001) 0:00:05.299 ********* 2025-05-14 02:43:08.889746 | orchestrator | changed: [testbed-manager] 2025-05-14 02:43:08.889751 | orchestrator | 2025-05-14 02:43:08.889777 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-14 02:43:08.889784 | orchestrator | Wednesday 14 May 2025 02:42:12 +0000 (0:00:01.011) 0:00:06.311 ********* 2025-05-14 02:43:08.889790 | orchestrator | changed: [testbed-manager] 2025-05-14 02:43:08.889795 | orchestrator | 2025-05-14 02:43:08.889801 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-14 02:43:08.889824 | orchestrator | Wednesday 14 May 2025 02:42:13 +0000 (0:00:01.079) 0:00:07.391 ********* 2025-05-14 02:43:08.889830 | orchestrator | changed: [testbed-manager] 2025-05-14 02:43:08.889836 | orchestrator | 2025-05-14 02:43:08.889873 | orchestrator | TASK [Create admin user] ******************************************************* 2025-05-14 02:43:08.889880 | orchestrator | Wednesday 14 May 2025 02:42:14 +0000 (0:00:01.192) 0:00:08.583 ********* 2025-05-14 02:43:08.889885 | orchestrator | changed: [testbed-manager] 2025-05-14 02:43:08.889891 | orchestrator | 2025-05-14 02:43:08.889897 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-05-14 02:43:08.889903 | orchestrator | Wednesday 14 May 2025 02:42:32 +0000 (0:00:18.140) 0:00:26.724 ********* 2025-05-14 02:43:08.889908 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:43:08.889914 | orchestrator | 2025-05-14 02:43:08.889921 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-14 02:43:08.889926 | orchestrator | 2025-05-14 02:43:08.889932 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-14 02:43:08.889938 | orchestrator | Wednesday 14 May 2025 02:42:32 +0000 (0:00:00.503) 0:00:27.227 ********* 2025-05-14 02:43:08.889943 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:08.889949 | orchestrator | 2025-05-14 02:43:08.889955 | orchestrator | PLAY [Restart ceph manager services] 
******************************************* 2025-05-14 02:43:08.889961 | orchestrator | 2025-05-14 02:43:08.889966 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-14 02:43:08.889972 | orchestrator | Wednesday 14 May 2025 02:42:35 +0000 (0:00:02.129) 0:00:29.356 ********* 2025-05-14 02:43:08.889978 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:43:08.889983 | orchestrator | 2025-05-14 02:43:08.889989 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-14 02:43:08.889995 | orchestrator | 2025-05-14 02:43:08.890000 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-14 02:43:08.890006 | orchestrator | Wednesday 14 May 2025 02:42:36 +0000 (0:00:01.753) 0:00:31.110 ********* 2025-05-14 02:43:08.890012 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:43:08.890059 | orchestrator | 2025-05-14 02:43:08.890065 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:43:08.890073 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-14 02:43:08.890080 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:43:08.890086 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:43:08.890093 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:43:08.890100 | orchestrator | 2025-05-14 02:43:08.890109 | orchestrator | 2025-05-14 02:43:08.890115 | orchestrator | 2025-05-14 02:43:08.890122 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:43:08.890129 | orchestrator | Wednesday 14 May 2025 02:42:38 +0000 (0:00:01.430) 0:00:32.541 ********* 2025-05-14 02:43:08.890136 | orchestrator | =============================================================================== 2025-05-14 02:43:08.890143 | orchestrator | Create admin user ------------------------------------------------------ 18.14s 2025-05-14 02:43:08.890159 | orchestrator | Restart ceph manager service -------------------------------------------- 5.31s 2025-05-14 02:43:08.890167 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s 2025-05-14 02:43:08.890175 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.09s 2025-05-14 02:43:08.890182 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.08s 2025-05-14 02:43:08.890189 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.07s 2025-05-14 02:43:08.890228 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.01s 2025-05-14 02:43:08.890236 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.00s 2025-05-14 02:43:08.890242 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.91s 2025-05-14 02:43:08.890283 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.90s 2025-05-14 02:43:08.890290 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.50s 2025-05-14 02:43:08.890297 | orchestrator | 2025-05-14 02:43:08.890683 | 
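The dashboard bootstrap above reduces to a series of ceph mgr configuration calls followed by creating an admin account from a temporary password file. A minimal sketch of equivalent CLI invocations wrapped in Ansible command tasks, assuming the ceph wrapper installed by the cephclient role is on the manager's PATH and using a hypothetical /tmp/ceph_dashboard_password file; this is an illustration, not the exact OSISM playbook:

- name: Bootstrap ceph dashboard (illustrative)
  hosts: testbed-manager
  gather_facts: false
  tasks:
    - name: Configure and re-enable the dashboard module
      ansible.builtin.command: "{{ item }}"
      loop:
        - ceph mgr module disable dashboard
        - ceph config set mgr mgr/dashboard/ssl false
        - ceph config set mgr mgr/dashboard/server_port 7000
        - ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
        - ceph config set mgr mgr/dashboard/standby_behaviour error
        - ceph config set mgr mgr/dashboard/standby_error_status_code 404
        - ceph mgr module enable dashboard
      changed_when: true

    - name: Create admin user from the temporary password file
      ansible.builtin.command: ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
      changed_when: true

As the log shows, the ceph manager service is then restarted on each node so the new dashboard settings take effect.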
orchestrator | 2025-05-14 02:43:08.890824 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:43:08.890847 | orchestrator | 2025-05-14 02:43:08.890863 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:43:08.890878 | orchestrator | Wednesday 14 May 2025 02:41:43 +0000 (0:00:00.616) 0:00:00.616 ********* 2025-05-14 02:43:08.890894 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:43:08.890911 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:43:08.890925 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:43:08.890939 | orchestrator | 2025-05-14 02:43:08.890954 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:43:08.890963 | orchestrator | Wednesday 14 May 2025 02:41:43 +0000 (0:00:00.659) 0:00:01.275 ********* 2025-05-14 02:43:08.890973 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-14 02:43:08.890983 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-14 02:43:08.890991 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-14 02:43:08.891000 | orchestrator | 2025-05-14 02:43:08.891009 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-14 02:43:08.891018 | orchestrator | 2025-05-14 02:43:08.891027 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-14 02:43:08.891043 | orchestrator | Wednesday 14 May 2025 02:41:44 +0000 (0:00:00.513) 0:00:01.788 ********* 2025-05-14 02:43:08.891063 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:43:08.891084 | orchestrator | 2025-05-14 02:43:08.891103 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-14 02:43:08.891123 | orchestrator | Wednesday 14 May 2025 02:41:45 +0000 (0:00:00.696) 0:00:02.485 ********* 2025-05-14 02:43:08.891142 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-14 02:43:08.891163 | orchestrator | 2025-05-14 02:43:08.891194 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-14 02:43:08.891223 | orchestrator | Wednesday 14 May 2025 02:41:48 +0000 (0:00:03.341) 0:00:05.826 ********* 2025-05-14 02:43:08.891241 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-14 02:43:08.891261 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-14 02:43:08.891281 | orchestrator | 2025-05-14 02:43:08.891299 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-14 02:43:08.891312 | orchestrator | Wednesday 14 May 2025 02:41:54 +0000 (0:00:06.326) 0:00:12.153 ********* 2025-05-14 02:43:08.891326 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:43:08.891338 | orchestrator | 2025-05-14 02:43:08.891350 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-14 02:43:08.891363 | orchestrator | Wednesday 14 May 2025 02:41:59 +0000 (0:00:04.300) 0:00:16.453 ********* 2025-05-14 02:43:08.891375 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:43:08.891388 | orchestrator 
| changed: [testbed-node-0] => (item=placement -> service) 2025-05-14 02:43:08.891400 | orchestrator | 2025-05-14 02:43:08.891413 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-14 02:43:08.891425 | orchestrator | Wednesday 14 May 2025 02:42:03 +0000 (0:00:04.282) 0:00:20.736 ********* 2025-05-14 02:43:08.891502 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:43:08.891516 | orchestrator | 2025-05-14 02:43:08.891530 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-14 02:43:08.891541 | orchestrator | Wednesday 14 May 2025 02:42:06 +0000 (0:00:03.550) 0:00:24.286 ********* 2025-05-14 02:43:08.891552 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-14 02:43:08.891563 | orchestrator | 2025-05-14 02:43:08.891574 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-14 02:43:08.891585 | orchestrator | Wednesday 14 May 2025 02:42:11 +0000 (0:00:05.023) 0:00:29.310 ********* 2025-05-14 02:43:08.891595 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:08.891606 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:08.891617 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:08.891628 | orchestrator | 2025-05-14 02:43:08.891639 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-14 02:43:08.891650 | orchestrator | Wednesday 14 May 2025 02:42:12 +0000 (0:00:00.861) 0:00:30.172 ********* 2025-05-14 02:43:08.891686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.891728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.891742 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.891753 | orchestrator | 2025-05-14 02:43:08.891764 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-14 02:43:08.891784 | orchestrator | Wednesday 14 May 2025 02:42:14 +0000 (0:00:01.854) 0:00:32.026 ********* 2025-05-14 02:43:08.891795 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:08.891806 | orchestrator | 2025-05-14 02:43:08.891817 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-14 02:43:08.891828 | orchestrator | Wednesday 14 May 2025 02:42:14 +0000 (0:00:00.281) 0:00:32.308 ********* 2025-05-14 02:43:08.891839 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:08.891850 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:08.891860 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:08.891871 | orchestrator | 2025-05-14 02:43:08.891882 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-14 02:43:08.891892 | orchestrator | Wednesday 14 May 2025 02:42:15 +0000 (0:00:00.509) 0:00:32.817 ********* 2025-05-14 02:43:08.891903 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:43:08.891914 | orchestrator | 2025-05-14 02:43:08.891925 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-14 02:43:08.891937 | orchestrator | Wednesday 14 May 2025 02:42:17 +0000 (0:00:01.640) 0:00:34.458 ********* 2025-05-14 02:43:08.891949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.891977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.891990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.892009 | orchestrator | 2025-05-14 02:43:08.892020 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-14 02:43:08.892031 | orchestrator | Wednesday 14 May 2025 02:42:19 +0000 (0:00:02.016) 0:00:36.474 ********* 2025-05-14 02:43:08.892042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:43:08.892053 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:08.892065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:43:08.892076 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:08.892106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:43:08.892127 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:08.892147 | orchestrator | 2025-05-14 02:43:08.892167 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-14 02:43:08.892187 | orchestrator | Wednesday 14 May 2025 02:42:20 +0000 (0:00:01.110) 0:00:37.585 ********* 2025-05-14 02:43:08.892206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:43:08.892235 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:08.892247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:43:08.892258 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:08.892269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:43:08.892281 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:08.892295 | orchestrator | 2025-05-14 02:43:08.892315 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-14 02:43:08.892334 | orchestrator | Wednesday 14 May 2025 02:42:22 +0000 (0:00:01.900) 0:00:39.485 ********* 2025-05-14 02:43:08.892374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.892398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.892430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.892450 | orchestrator | 2025-05-14 02:43:08.892470 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-14 02:43:08.892527 | orchestrator | Wednesday 14 May 2025 02:42:24 +0000 (0:00:02.294) 0:00:41.780 ********* 2025-05-14 02:43:08.892539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.892558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.892580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.892606 | orchestrator | 2025-05-14 02:43:08.892618 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-14 02:43:08.892628 | orchestrator | Wednesday 14 May 2025 02:42:29 +0000 (0:00:05.335) 0:00:47.115 ********* 2025-05-14 02:43:08.892639 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-14 02:43:08.892650 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-14 02:43:08.892661 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-14 02:43:08.892672 | orchestrator | 2025-05-14 02:43:08.892683 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-14 02:43:08.892694 | orchestrator | Wednesday 14 May 2025 02:42:33 +0000 (0:00:03.382) 0:00:50.498 ********* 2025-05-14 02:43:08.892704 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:08.892715 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:43:08.892726 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:43:08.892736 | orchestrator | 2025-05-14 02:43:08.892747 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-14 02:43:08.892757 | orchestrator | Wednesday 14 May 2025 02:42:36 +0000 (0:00:03.366) 0:00:53.864 ********* 2025-05-14 02:43:08.892769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:43:08.892780 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:08.892791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:43:08.892808 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:08.892830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-14 02:43:08.892850 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:08.892861 | orchestrator | 2025-05-14 02:43:08.892872 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-14 02:43:08.892883 | orchestrator | Wednesday 14 May 2025 02:42:37 +0000 (0:00:01.170) 0:00:55.035 ********* 2025-05-14 02:43:08.892894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.892906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.892917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:08.892935 | orchestrator | 2025-05-14 02:43:08.892954 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-14 02:43:08.892970 | orchestrator | Wednesday 14 May 2025 02:42:39 +0000 (0:00:01.758) 0:00:56.793 ********* 2025-05-14 02:43:08.892998 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:08.893031 | orchestrator | 2025-05-14 02:43:08.893050 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-14 02:43:08.893068 | orchestrator | Wednesday 14 May 2025 02:42:42 +0000 (0:00:03.033) 0:00:59.827 ********* 2025-05-14 02:43:08.893085 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:08.893102 | orchestrator | 2025-05-14 02:43:08.893120 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-14 02:43:08.893137 | orchestrator | Wednesday 14 May 2025 02:42:45 +0000 (0:00:02.645) 0:01:02.472 ********* 2025-05-14 02:43:08.893194 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:08.893226 | orchestrator | 2025-05-14 02:43:08.893239 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-14 02:43:08.893250 | orchestrator | Wednesday 14 May 2025 02:42:57 +0000 (0:00:12.807) 0:01:15.279 ********* 2025-05-14 02:43:08.893261 | orchestrator | 2025-05-14 02:43:08.893271 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-14 02:43:08.893282 | orchestrator | Wednesday 14 May 2025 02:42:58 +0000 (0:00:00.103) 0:01:15.382 ********* 2025-05-14 02:43:08.893292 | orchestrator | 2025-05-14 02:43:08.893303 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-14 02:43:08.893314 | orchestrator | Wednesday 14 May 2025 02:42:58 +0000 (0:00:00.253) 0:01:15.636 ********* 2025-05-14 02:43:08.893324 | orchestrator | 2025-05-14 02:43:08.893335 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-14 02:43:08.893346 | orchestrator | Wednesday 14 May 2025 02:42:58 +0000 (0:00:00.099) 0:01:15.735 ********* 2025-05-14 02:43:08.893356 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:43:08.893367 | orchestrator | changed: 
[testbed-node-0]
2025-05-14 02:43:08.893378 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:43:08.893389 | orchestrator |
2025-05-14 02:43:08.893399 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 02:43:08.893411 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-14 02:43:08.893424 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-14 02:43:08.893435 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-14 02:43:08.893446 | orchestrator |
2025-05-14 02:43:08.893456 | orchestrator |
2025-05-14 02:43:08.893467 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:43:08.893556 | orchestrator | Wednesday 14 May 2025 02:43:08 +0000 (0:00:09.984) 0:01:25.720 *********
2025-05-14 02:43:08.893568 | orchestrator | ===============================================================================
2025-05-14 02:43:08.893579 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.81s
2025-05-14 02:43:08.893590 | orchestrator | placement : Restart placement-api container ----------------------------- 9.98s
2025-05-14 02:43:08.893601 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.33s
2025-05-14 02:43:08.893612 | orchestrator | placement : Copying over placement.conf --------------------------------- 5.34s
2025-05-14 02:43:08.893622 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 5.02s
2025-05-14 02:43:08.893633 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.30s
2025-05-14 02:43:08.893643 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.28s
2025-05-14 02:43:08.893654 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.55s
2025-05-14 02:43:08.893665 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 3.38s
2025-05-14 02:43:08.893675 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 3.37s
2025-05-14 02:43:08.893698 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.34s
2025-05-14 02:43:08.893709 | orchestrator | placement : Creating placement databases -------------------------------- 3.03s
2025-05-14 02:43:08.893720 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.64s
2025-05-14 02:43:08.893730 | orchestrator | placement : Copying over config.json files for services ----------------- 2.29s
2025-05-14 02:43:08.893741 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.02s
2025-05-14 02:43:08.893751 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.90s
2025-05-14 02:43:08.893762 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.85s
2025-05-14 02:43:08.893773 | orchestrator | placement : Check placement containers ---------------------------------- 1.76s
2025-05-14 02:43:08.893783 | orchestrator | placement : include_tasks ----------------------------------------------- 1.64s
2025-05-14 02:43:08.893794 | orchestrator | placement : Copying over existing policy
file --------------------------- 1.17s 2025-05-14 02:43:08.893805 | orchestrator | 2025-05-14 02:43:08 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:08.893816 | orchestrator | 2025-05-14 02:43:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:11.924879 | orchestrator | 2025-05-14 02:43:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:11.925658 | orchestrator | 2025-05-14 02:43:11 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:11.926730 | orchestrator | 2025-05-14 02:43:11 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:11.928834 | orchestrator | 2025-05-14 02:43:11 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:43:11.930462 | orchestrator | 2025-05-14 02:43:11 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:11.930579 | orchestrator | 2025-05-14 02:43:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:14.960202 | orchestrator | 2025-05-14 02:43:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:14.963132 | orchestrator | 2025-05-14 02:43:14 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:14.963190 | orchestrator | 2025-05-14 02:43:14 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:14.966955 | orchestrator | 2025-05-14 02:43:14 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:43:14.967018 | orchestrator | 2025-05-14 02:43:14 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:14.967031 | orchestrator | 2025-05-14 02:43:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:17.996785 | orchestrator | 2025-05-14 02:43:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:17.997746 | orchestrator | 2025-05-14 02:43:17 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:17.999152 | orchestrator | 2025-05-14 02:43:17 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:18.000860 | orchestrator | 2025-05-14 02:43:17 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:43:18.002311 | orchestrator | 2025-05-14 02:43:18 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:18.002389 | orchestrator | 2025-05-14 02:43:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:21.048212 | orchestrator | 2025-05-14 02:43:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:21.048756 | orchestrator | 2025-05-14 02:43:21 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:21.050674 | orchestrator | 2025-05-14 02:43:21 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:21.051636 | orchestrator | 2025-05-14 02:43:21 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:43:21.052972 | orchestrator | 2025-05-14 02:43:21 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:21.053019 | orchestrator | 2025-05-14 02:43:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:24.082213 | orchestrator | 2025-05-14 02:43:24 | INFO  | Task 
f2ebee91-2931-4509-97d5-4ad129dae2f2 is in state STARTED 2025-05-14 02:43:24.082346 | orchestrator | 2025-05-14 02:43:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:24.083222 | orchestrator | 2025-05-14 02:43:24 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:24.083743 | orchestrator | 2025-05-14 02:43:24 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:24.084869 | orchestrator | 2025-05-14 02:43:24 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:43:24.085623 | orchestrator | 2025-05-14 02:43:24 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:24.085646 | orchestrator | 2025-05-14 02:43:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:27.120758 | orchestrator | 2025-05-14 02:43:27 | INFO  | Task f2ebee91-2931-4509-97d5-4ad129dae2f2 is in state STARTED 2025-05-14 02:43:27.121394 | orchestrator | 2025-05-14 02:43:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:27.121949 | orchestrator | 2025-05-14 02:43:27 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:27.127317 | orchestrator | 2025-05-14 02:43:27 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:27.134105 | orchestrator | 2025-05-14 02:43:27 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:43:27.134199 | orchestrator | 2025-05-14 02:43:27 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:27.134248 | orchestrator | 2025-05-14 02:43:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:30.163927 | orchestrator | 2025-05-14 02:43:30 | INFO  | Task f2ebee91-2931-4509-97d5-4ad129dae2f2 is in state STARTED 2025-05-14 02:43:30.165054 | orchestrator | 2025-05-14 02:43:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:30.165650 | orchestrator | 2025-05-14 02:43:30 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:30.172355 | orchestrator | 2025-05-14 02:43:30 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:30.173754 | orchestrator | 2025-05-14 02:43:30 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state STARTED 2025-05-14 02:43:30.176686 | orchestrator | 2025-05-14 02:43:30 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:30.176726 | orchestrator | 2025-05-14 02:43:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:33.210412 | orchestrator | 2025-05-14 02:43:33 | INFO  | Task f2ebee91-2931-4509-97d5-4ad129dae2f2 is in state STARTED 2025-05-14 02:43:33.210973 | orchestrator | 2025-05-14 02:43:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:33.211829 | orchestrator | 2025-05-14 02:43:33 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:33.214938 | orchestrator | 2025-05-14 02:43:33 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:33.215759 | orchestrator | 2025-05-14 02:43:33 | INFO  | Task 57c8bd42-36e4-48ad-96db-915fa86e149a is in state SUCCESS 2025-05-14 02:43:33.217078 | orchestrator | 2025-05-14 02:43:33.217124 | orchestrator | 2025-05-14 02:43:33.217138 | orchestrator | PLAY [Group hosts based on 
configuration] ************************************** 2025-05-14 02:43:33.217152 | orchestrator | 2025-05-14 02:43:33.217164 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:43:33.217176 | orchestrator | Wednesday 14 May 2025 02:41:04 +0000 (0:00:00.336) 0:00:00.336 ********* 2025-05-14 02:43:33.217189 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:43:33.217203 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:43:33.217215 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:43:33.217227 | orchestrator | 2025-05-14 02:43:33.217314 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:43:33.217330 | orchestrator | Wednesday 14 May 2025 02:41:05 +0000 (0:00:00.856) 0:00:01.193 ********* 2025-05-14 02:43:33.217343 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-14 02:43:33.217356 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-14 02:43:33.217368 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-14 02:43:33.217381 | orchestrator | 2025-05-14 02:43:33.217393 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-14 02:43:33.217405 | orchestrator | 2025-05-14 02:43:33.217418 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-14 02:43:33.217430 | orchestrator | Wednesday 14 May 2025 02:41:06 +0000 (0:00:00.538) 0:00:01.731 ********* 2025-05-14 02:43:33.217445 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:43:33.217459 | orchestrator | 2025-05-14 02:43:33.217523 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-14 02:43:33.217537 | orchestrator | Wednesday 14 May 2025 02:41:06 +0000 (0:00:00.711) 0:00:02.443 ********* 2025-05-14 02:43:33.217550 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-14 02:43:33.217563 | orchestrator | 2025-05-14 02:43:33.217576 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-14 02:43:33.217589 | orchestrator | Wednesday 14 May 2025 02:41:10 +0000 (0:00:03.573) 0:00:06.016 ********* 2025-05-14 02:43:33.217601 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-14 02:43:33.217614 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-14 02:43:33.217622 | orchestrator | 2025-05-14 02:43:33.217629 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-14 02:43:33.217640 | orchestrator | Wednesday 14 May 2025 02:41:17 +0000 (0:00:06.836) 0:00:12.853 ********* 2025-05-14 02:43:33.217653 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating projects (5 retries left). 
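The "FAILED - RETRYING: ... (5 retries left)" line above is Ansible's standard output for a task that declares retries/until/delay: the Keystone project creation is simply re-attempted until the API answers or the attempts run out, which is why the same item still finishes "ok" a few lines further down. A minimal Python sketch of that bounded retry-with-delay idea follows; the names, timings, and the commented client call are purely illustrative and are not code taken from kolla-ansible.

    import time

    def retry(call, attempts=5, delay=10):
        # Re-run "call" until it succeeds or the attempts are used up,
        # roughly what an Ansible task with retries/delay and an until
        # condition does before reporting the final result.
        for remaining in range(attempts, 0, -1):
            try:
                return call()
            except Exception:
                if remaining == 1:
                    raise
                print(f"FAILED - RETRYING ({remaining - 1} retries left)")
                time.sleep(delay)

    # Hypothetical use: keep asking Keystone to create the 'service' project
    # until the endpoint responds.
    # retry(lambda: keystone_client.projects.create(name="service", domain="default"))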
2025-05-14 02:43:33.217665 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:43:33.217677 | orchestrator | 2025-05-14 02:43:33.217689 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-14 02:43:33.217701 | orchestrator | Wednesday 14 May 2025 02:41:33 +0000 (0:00:16.515) 0:00:29.369 ********* 2025-05-14 02:43:33.217714 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:43:33.217726 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-14 02:43:33.217737 | orchestrator | 2025-05-14 02:43:33.217749 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-14 02:43:33.217786 | orchestrator | Wednesday 14 May 2025 02:41:37 +0000 (0:00:03.929) 0:00:33.298 ********* 2025-05-14 02:43:33.217800 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:43:33.217812 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-14 02:43:33.217826 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-14 02:43:33.217838 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-14 02:43:33.217865 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-14 02:43:33.217873 | orchestrator | 2025-05-14 02:43:33.217880 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-14 02:43:33.217888 | orchestrator | Wednesday 14 May 2025 02:41:53 +0000 (0:00:15.977) 0:00:49.275 ********* 2025-05-14 02:43:33.217895 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-14 02:43:33.217902 | orchestrator | 2025-05-14 02:43:33.217909 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-14 02:43:33.217916 | orchestrator | Wednesday 14 May 2025 02:41:58 +0000 (0:00:04.544) 0:00:53.820 ********* 2025-05-14 02:43:33.217944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.217955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.217963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.217982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.217994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.218002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.218061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.218075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.218083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.218097 | orchestrator | 2025-05-14 02:43:33.218105 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-14 02:43:33.218112 | orchestrator | Wednesday 14 May 2025 02:42:01 +0000 (0:00:03.774) 0:00:57.595 ********* 2025-05-14 02:43:33.218119 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-14 02:43:33.218126 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-14 02:43:33.218133 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-14 02:43:33.218141 | orchestrator | 2025-05-14 02:43:33.218148 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-05-14 02:43:33.218155 | orchestrator | Wednesday 14 May 2025 02:42:04 +0000 (0:00:02.419) 0:01:00.014 ********* 2025-05-14 02:43:33.218162 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:33.218169 | orchestrator | 2025-05-14 02:43:33.218177 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-05-14 02:43:33.218184 | orchestrator | Wednesday 14 May 2025 02:42:04 +0000 (0:00:00.206) 0:01:00.221 ********* 2025-05-14 02:43:33.218190 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:33.218202 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:33.218209 | orchestrator | skipping: 
[testbed-node-2] 2025-05-14 02:43:33.218216 | orchestrator | 2025-05-14 02:43:33.218223 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-14 02:43:33.218231 | orchestrator | Wednesday 14 May 2025 02:42:05 +0000 (0:00:01.280) 0:01:01.502 ********* 2025-05-14 02:43:33.218238 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:43:33.218246 | orchestrator | 2025-05-14 02:43:33.218253 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-14 02:43:33.218260 | orchestrator | Wednesday 14 May 2025 02:42:07 +0000 (0:00:01.618) 0:01:03.120 ********* 2025-05-14 02:43:33.218273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.218282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.218295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.218308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.218322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.218333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.218352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.218364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.218383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.218396 | orchestrator | 2025-05-14 02:43:33.218408 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-14 02:43:33.218420 | orchestrator | Wednesday 14 May 2025 02:42:12 +0000 (0:00:04.753) 0:01:07.873 ********* 2025-05-14 02:43:33.218438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:43:33.218490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:43:33.218510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 
'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.218535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.218548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.218561 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:33.218573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.218598 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:33.218609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:43:33.218629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.218640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.218660 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:33.218672 | orchestrator | 2025-05-14 02:43:33.218683 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-14 02:43:33.218696 | orchestrator | Wednesday 14 May 2025 02:42:13 +0000 (0:00:01.566) 0:01:09.440 ********* 2025-05-14 02:43:33.218708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:43:33.218727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 
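Both backend TLS copy tasks above are skipped on every node, which matches the service definitions printed in the items: each haproxy listener carries 'tls_backend': 'no'. The short Python sketch below applies the same kind of check to one of the dictionaries from this log; the exact conditional used by kolla-ansible may differ, and the data is abridged to the fields needed here.

    # One service definition as dumped in the log above (abridged).
    barbican_api = {
        "container_name": "barbican_api",
        "enabled": True,
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"]},
        "haproxy": {
            "barbican_api": {"enabled": "yes", "external": False,
                             "port": "9311", "tls_backend": "no"},
            "barbican_api_external": {"enabled": "yes", "external": True,
                                      "port": "9311", "tls_backend": "no"},
        },
    }

    def needs_backend_tls(service: dict) -> bool:
        # True only if some haproxy listener of the service expects a
        # TLS-enabled backend; services without an 'haproxy' key never do.
        return any(listener.get("tls_backend") == "yes"
                   for listener in service.get("haproxy", {}).values())

    # With tls_backend set to 'no' on both listeners, the certificate and
    # key copy steps have nothing to do, hence the "skipping" results.
    print(needs_backend_tls(barbican_api))  # -> False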
 2025-05-14 02:43:33.218740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.218752 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:33.218770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:43:33.218791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.218803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.218815 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:33.218831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:43:33.218844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.218856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.218866 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:33.218885 | orchestrator | 2025-05-14 02:43:33.218903 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-14 02:43:33.218915 | orchestrator | Wednesday 14 May 2025 02:42:16 +0000 (0:00:02.378) 0:01:11.819 ********* 2025-05-14 02:43:33.218927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.218940 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.218958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.218972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.218991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219068 | orchestrator | 2025-05-14 02:43:33.219081 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-14 02:43:33.219093 | orchestrator | Wednesday 14 May 2025 02:42:20 +0000 (0:00:04.785) 0:01:16.604 ********* 2025-05-14 02:43:33.219105 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:43:33.219117 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:33.219128 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:43:33.219140 | orchestrator | 2025-05-14 02:43:33.219152 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-14 02:43:33.219164 | orchestrator | Wednesday 14 May 2025 02:42:24 +0000 (0:00:03.454) 0:01:20.058 ********* 2025-05-14 02:43:33.219222 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:43:33.219236 | orchestrator | 2025-05-14 02:43:33.219248 | orchestrator | 
TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-14 02:43:33.219260 | orchestrator | Wednesday 14 May 2025 02:42:27 +0000 (0:00:02.927) 0:01:22.986 ********* 2025-05-14 02:43:33.219271 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:33.219278 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:33.219285 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:33.219293 | orchestrator | 2025-05-14 02:43:33.219300 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-14 02:43:33.219310 | orchestrator | Wednesday 14 May 2025 02:42:29 +0000 (0:00:02.374) 0:01:25.360 ********* 2025-05-14 02:43:33.219334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.219349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.219368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.219403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219588 | orchestrator | 2025-05-14 02:43:33.219600 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-14 02:43:33.219608 | orchestrator | Wednesday 14 May 2025 02:42:41 +0000 (0:00:11.722) 0:01:37.083 ********* 2025-05-14 02:43:33.219622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:43:33.219639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.219647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.219655 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-14 02:43:33.219667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.219683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.219691 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:33.219698 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:33.219712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  
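The 'haproxy' sub-dict carried by the barbican-api items above and below is what ultimately drives the load-balancer configuration: an internal barbican_api listener and an external barbican_api_external listener, both on port 9311, the external one served under api.testbed.osism.xyz. A rough sketch of turning such a dict into an HAProxy-style stanza, assuming the three 192.168.16.1x addresses seen in the healthchecks are the backend nodes; kolla-ansible's real templates additionally handle TLS termination, ACLs and per-service options.

# Illustrative sketch only -- renders a minimal HAProxy-like stanza from the
# 'haproxy' sub-dict shown in the barbican-api items of this log.
listeners = {
    "barbican_api": {"enabled": "yes", "mode": "http", "external": False,
                     "port": "9311", "listen_port": "9311", "tls_backend": "no"},
    "barbican_api_external": {"enabled": "yes", "mode": "http", "external": True,
                              "external_fqdn": "api.testbed.osism.xyz",
                              "port": "9311", "listen_port": "9311", "tls_backend": "no"},
}
# Assumption: the control nodes whose internal API addresses appear in the
# healthcheck_curl commands act as the backends.
backends = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]

def render(name: str, cfg: dict) -> str:
    lines = [f"listen {name}",
             f"    mode {cfg['mode']}",
             f"    bind *:{cfg['listen_port']}"]
    for i, addr in enumerate(backends):
        lines.append(f"    server node-{i} {addr}:{cfg['port']} check")
    return "\n".join(lines)

for name, cfg in listeners.items():
    if cfg.get("enabled") == "yes":
        print(render(name, cfg), end="\n\n")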
2025-05-14 02:43:33.219720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.219728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:43:33.219735 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:33.219742 | orchestrator | 2025-05-14 02:43:33.219749 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-14 02:43:33.219757 | orchestrator | Wednesday 14 May 2025 02:42:42 +0000 (0:00:01.470) 0:01:38.554 ********* 2025-05-14 02:43:33.219767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.219780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.219794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-14 02:43:33.219803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:43:33.219862 | orchestrator | 2025-05-14 02:43:33.219869 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-14 02:43:33.219877 | orchestrator | Wednesday 14 May 2025 02:42:46 +0000 (0:00:03.524) 0:01:42.078 ********* 2025-05-14 02:43:33.219884 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:43:33.219891 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:43:33.219898 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:43:33.219905 | orchestrator | 2025-05-14 02:43:33.219912 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-14 02:43:33.219919 | orchestrator | Wednesday 14 May 2025 02:42:46 +0000 (0:00:00.420) 0:01:42.498 ********* 2025-05-14 02:43:33.219926 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:33.219933 | orchestrator | 2025-05-14 02:43:33.219941 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-14 02:43:33.219948 | orchestrator | Wednesday 14 May 2025 02:42:49 +0000 (0:00:02.636) 0:01:45.135 ********* 2025-05-14 02:43:33.219954 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:33.219962 | orchestrator | 2025-05-14 02:43:33.219969 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-14 02:43:33.219976 | orchestrator | Wednesday 14 May 2025 02:42:51 +0000 (0:00:02.377) 0:01:47.512 ********* 2025-05-14 02:43:33.219983 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:43:33.219990 | orchestrator | 2025-05-14 02:43:33.219997 | orchestrator | TASK [barbican : 
Flush handlers] ***********************************************
2025-05-14 02:43:33.220009 | orchestrator | Wednesday 14 May 2025 02:43:02 +0000 (0:00:10.795) 0:01:58.308 *********
2025-05-14 02:43:33.220016 | orchestrator |
2025-05-14 02:43:33.220023 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-14 02:43:33.220030 | orchestrator | Wednesday 14 May 2025 02:43:02 +0000 (0:00:00.063) 0:01:58.372 *********
2025-05-14 02:43:33.220037 | orchestrator |
2025-05-14 02:43:33.220044 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-14 02:43:33.220051 | orchestrator | Wednesday 14 May 2025 02:43:02 +0000 (0:00:00.203) 0:01:58.575 *********
2025-05-14 02:43:33.220058 | orchestrator |
2025-05-14 02:43:33.220065 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-05-14 02:43:33.220072 | orchestrator | Wednesday 14 May 2025 02:43:02 +0000 (0:00:00.057) 0:01:58.633 *********
2025-05-14 02:43:33.220079 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:43:33.220086 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:43:33.220093 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:43:33.220100 | orchestrator |
2025-05-14 02:43:33.220107 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-05-14 02:43:33.220114 | orchestrator | Wednesday 14 May 2025 02:43:10 +0000 (0:00:07.127) 0:02:05.760 *********
2025-05-14 02:43:33.220121 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:43:33.220128 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:43:33.220135 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:43:33.220142 | orchestrator |
2025-05-14 02:43:33.220149 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-05-14 02:43:33.220157 | orchestrator | Wednesday 14 May 2025 02:43:23 +0000 (0:00:13.655) 0:02:19.416 *********
2025-05-14 02:43:33.220167 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:43:33.220174 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:43:33.220181 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:43:33.220189 | orchestrator |
2025-05-14 02:43:33.220196 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 02:43:33.220204 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-14 02:43:33.220212 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-14 02:43:33.220219 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-14 02:43:33.220227 | orchestrator |
2025-05-14 02:43:33.220234 | orchestrator |
2025-05-14 02:43:33.220244 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:43:33.220256 | orchestrator | Wednesday 14 May 2025 02:43:32 +0000 (0:00:08.737) 0:02:28.153 *********
2025-05-14 02:43:33.220273 | orchestrator | ===============================================================================
2025-05-14 02:43:33.220290 | orchestrator | service-ks-register : barbican | Creating projects --------------------- 16.52s
2025-05-14 02:43:33.220301 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.98s
2025-05-14 02:43:33.220313 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 13.66s
2025-05-14 02:43:33.220325 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.72s
2025-05-14 02:43:33.220336 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.80s
2025-05-14 02:43:33.220347 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.74s
2025-05-14 02:43:33.220366 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.13s
2025-05-14 02:43:33.220379 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.84s
2025-05-14 02:43:33.220391 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.79s
2025-05-14 02:43:33.220403 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.75s
2025-05-14 02:43:33.220425 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.54s
2025-05-14 02:43:33.220433 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.93s
2025-05-14 02:43:33.220440 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.77s
2025-05-14 02:43:33.220447 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.57s
2025-05-14 02:43:33.220453 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.52s
2025-05-14 02:43:33.220461 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.45s
2025-05-14 02:43:33.220608 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.93s
2025-05-14 02:43:33.220625 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.64s
2025-05-14 02:43:33.220632 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.42s
2025-05-14 02:43:33.220639 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.38s
2025-05-14 02:43:33.220646 | orchestrator | 2025-05-14 02:43:33 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED
2025-05-14 02:43:33.220654 | orchestrator | 2025-05-14 02:43:33 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:43:36.251046 | orchestrator | 2025-05-14 02:43:36 | INFO  | Task f2ebee91-2931-4509-97d5-4ad129dae2f2 is in state SUCCESS
2025-05-14 02:43:36.251145 | orchestrator | 2025-05-14 02:43:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:43:36.251611 | orchestrator | 2025-05-14 02:43:36 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED
2025-05-14 02:43:36.252186 | orchestrator | 2025-05-14 02:43:36 | INFO  | Task 93c7a965-30e6-4710-a510-2f1a818b9ab5 is in state STARTED
2025-05-14 02:43:36.252894 | orchestrator | 2025-05-14 02:43:36 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED
2025-05-14 02:43:36.253278 | orchestrator | 2025-05-14 02:43:36 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED
2025-05-14 02:43:36.254693 | orchestrator | 2025-05-14 02:43:36 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:43:39.276386 | orchestrator | 2025-05-14 02:43:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:43:39.276623 |
orchestrator | 2025-05-14 02:43:39 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:39.277130 | orchestrator | 2025-05-14 02:43:39 | INFO  | Task 93c7a965-30e6-4710-a510-2f1a818b9ab5 is in state STARTED 2025-05-14 02:43:39.277736 | orchestrator | 2025-05-14 02:43:39 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:39.278126 | orchestrator | 2025-05-14 02:43:39 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:39.278151 | orchestrator | 2025-05-14 02:43:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:42.302137 | orchestrator | 2025-05-14 02:43:42 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:43:42.303643 | orchestrator | 2025-05-14 02:43:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:42.313007 | orchestrator | 2025-05-14 02:43:42 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:42.313078 | orchestrator | 2025-05-14 02:43:42 | INFO  | Task 93c7a965-30e6-4710-a510-2f1a818b9ab5 is in state SUCCESS 2025-05-14 02:43:42.314909 | orchestrator | 2025-05-14 02:43:42 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:42.316097 | orchestrator | 2025-05-14 02:43:42 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:42.316161 | orchestrator | 2025-05-14 02:43:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:45.346168 | orchestrator | 2025-05-14 02:43:45 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:43:45.346251 | orchestrator | 2025-05-14 02:43:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:45.346573 | orchestrator | 2025-05-14 02:43:45 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:45.348402 | orchestrator | 2025-05-14 02:43:45 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:45.348676 | orchestrator | 2025-05-14 02:43:45 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:45.348701 | orchestrator | 2025-05-14 02:43:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:48.377924 | orchestrator | 2025-05-14 02:43:48 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:43:48.378090 | orchestrator | 2025-05-14 02:43:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:48.378542 | orchestrator | 2025-05-14 02:43:48 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:48.378916 | orchestrator | 2025-05-14 02:43:48 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:48.379848 | orchestrator | 2025-05-14 02:43:48 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:48.379914 | orchestrator | 2025-05-14 02:43:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:51.423213 | orchestrator | 2025-05-14 02:43:51 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:43:51.424160 | orchestrator | 2025-05-14 02:43:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:51.424794 | orchestrator | 2025-05-14 02:43:51 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 
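The interleaved "Task <uuid> is in state ..." and "Wait 1 second(s) until the next check" lines come from the testbed wrapper that polls several deployment tasks until each one reaches a terminal state. Conceptually it is a poll-and-sleep loop like the sketch below; get_task_state() is a placeholder for the lookup against the OSISM task backend, and the exact client API and terminal states may differ.

# Conceptual sketch of the wait loop that produces the status lines above.
import time
from datetime import datetime

def get_task_state(task_id: str) -> str:
    # Placeholder: the real code queries the task backend for the current state.
    raise NotImplementedError

def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"{datetime.now():%Y-%m-%d %H:%M:%S} | INFO  | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):   # terminal states assumed here
                pending.discard(task_id)
        if pending:
            print(f"{datetime.now():%Y-%m-%d %H:%M:%S} | INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)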
2025-05-14 02:43:51.425229 | orchestrator | 2025-05-14 02:43:51 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:51.425938 | orchestrator | 2025-05-14 02:43:51 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:51.425965 | orchestrator | 2025-05-14 02:43:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:54.455954 | orchestrator | 2025-05-14 02:43:54 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:43:54.456035 | orchestrator | 2025-05-14 02:43:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:54.456484 | orchestrator | 2025-05-14 02:43:54 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:54.456969 | orchestrator | 2025-05-14 02:43:54 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:54.457567 | orchestrator | 2025-05-14 02:43:54 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:54.457578 | orchestrator | 2025-05-14 02:43:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:43:57.489062 | orchestrator | 2025-05-14 02:43:57 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:43:57.489163 | orchestrator | 2025-05-14 02:43:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:43:57.489196 | orchestrator | 2025-05-14 02:43:57 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:43:57.489233 | orchestrator | 2025-05-14 02:43:57 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:43:57.489943 | orchestrator | 2025-05-14 02:43:57 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:43:57.490068 | orchestrator | 2025-05-14 02:43:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:00.517732 | orchestrator | 2025-05-14 02:44:00 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:00.517965 | orchestrator | 2025-05-14 02:44:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:00.519317 | orchestrator | 2025-05-14 02:44:00 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:00.522798 | orchestrator | 2025-05-14 02:44:00 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:44:00.523421 | orchestrator | 2025-05-14 02:44:00 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:00.523490 | orchestrator | 2025-05-14 02:44:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:03.550159 | orchestrator | 2025-05-14 02:44:03 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:03.550232 | orchestrator | 2025-05-14 02:44:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:03.550749 | orchestrator | 2025-05-14 02:44:03 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:03.551351 | orchestrator | 2025-05-14 02:44:03 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:44:03.552089 | orchestrator | 2025-05-14 02:44:03 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:03.552134 | orchestrator | 2025-05-14 02:44:03 | INFO  | Wait 1 second(s) until the next check 
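Several plays in this log gate on a service port actually accepting connections before continuing, for example the "Waiting for Keystone public port to be UP" task further below (presumably Ansible's wait_for module). Reduced to plain Python, such a gate is a TCP connect with a timeout and retries; the host and port in the usage comment are placeholders for this sketch.

# Rough equivalent of a "wait for port" readiness check; not the playbook's code.
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 300.0, interval: float = 2.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True          # port accepts connections
        except OSError:
            time.sleep(interval)     # not up yet, retry until the deadline
    return False

# e.g. wait_for_port("api-int.testbed.osism.xyz", 5000)
# 5000 is Keystone's usual public port; the concrete port is an assumption here.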
2025-05-14 02:44:06.584807 | orchestrator | 2025-05-14 02:44:06 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:06.584871 | orchestrator | 2025-05-14 02:44:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:06.584886 | orchestrator | 2025-05-14 02:44:06 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:06.584974 | orchestrator | 2025-05-14 02:44:06 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:44:06.585814 | orchestrator | 2025-05-14 02:44:06 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:06.585857 | orchestrator | 2025-05-14 02:44:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:09.620405 | orchestrator | 2025-05-14 02:44:09 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:09.620549 | orchestrator | 2025-05-14 02:44:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:09.620974 | orchestrator | 2025-05-14 02:44:09 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:09.621564 | orchestrator | 2025-05-14 02:44:09 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:44:09.623039 | orchestrator | 2025-05-14 02:44:09 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:09.623075 | orchestrator | 2025-05-14 02:44:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:12.666729 | orchestrator | 2025-05-14 02:44:12 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:12.669102 | orchestrator | 2025-05-14 02:44:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:12.672767 | orchestrator | 2025-05-14 02:44:12 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:12.675844 | orchestrator | 2025-05-14 02:44:12 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:44:12.676505 | orchestrator | 2025-05-14 02:44:12 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:12.678628 | orchestrator | 2025-05-14 02:44:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:15.716401 | orchestrator | 2025-05-14 02:44:15 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:15.717002 | orchestrator | 2025-05-14 02:44:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:15.717752 | orchestrator | 2025-05-14 02:44:15 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:15.718217 | orchestrator | 2025-05-14 02:44:15 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state STARTED 2025-05-14 02:44:15.719997 | orchestrator | 2025-05-14 02:44:15 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:15.720038 | orchestrator | 2025-05-14 02:44:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:18.751307 | orchestrator | 2025-05-14 02:44:18 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:18.751700 | orchestrator | 2025-05-14 02:44:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:18.755674 | orchestrator | 2025-05-14 02:44:18 | INFO  | Task 
ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED
2025-05-14 02:44:18.760412 | orchestrator |
2025-05-14 02:44:18.760524 | orchestrator | None
2025-05-14 02:44:18.760540 | orchestrator |
2025-05-14 02:44:18.760553 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-14 02:44:18.760565 | orchestrator |
2025-05-14 02:44:18.760576 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-14 02:44:18.760587 | orchestrator | Wednesday 14 May 2025 02:43:37 +0000 (0:00:00.497) 0:00:00.497 *********
2025-05-14 02:44:18.760599 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:44:18.760610 | orchestrator | ok: [testbed-node-1]
2025-05-14 02:44:18.760621 | orchestrator | ok: [testbed-node-2]
2025-05-14 02:44:18.760632 | orchestrator |
2025-05-14 02:44:18.760643 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-14 02:44:18.760654 | orchestrator | Wednesday 14 May 2025 02:43:37 +0000 (0:00:00.538) 0:00:01.035 *********
2025-05-14 02:44:18.760665 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-14 02:44:18.760677 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-14 02:44:18.760687 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-14 02:44:18.760698 | orchestrator |
2025-05-14 02:44:18.760709 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-05-14 02:44:18.760720 | orchestrator |
2025-05-14 02:44:18.760731 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-05-14 02:44:18.760741 | orchestrator | Wednesday 14 May 2025 02:43:38 +0000 (0:00:00.647) 0:00:01.683 *********
2025-05-14 02:44:18.760752 | orchestrator | ok: [testbed-node-1]
2025-05-14 02:44:18.760763 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:44:18.760773 | orchestrator | ok: [testbed-node-2]
2025-05-14 02:44:18.760784 | orchestrator |
2025-05-14 02:44:18.760795 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 02:44:18.760807 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-14 02:44:18.760843 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-14 02:44:18.760855 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-14 02:44:18.760866 | orchestrator |
2025-05-14 02:44:18.760946 | orchestrator |
2025-05-14 02:44:18.760963 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:44:18.760976 | orchestrator | Wednesday 14 May 2025 02:43:39 +0000 (0:00:00.789) 0:00:02.472 *********
2025-05-14 02:44:18.760988 | orchestrator | ===============================================================================
2025-05-14 02:44:18.761001 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.79s
2025-05-14 02:44:18.761014 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2025-05-14 02:44:18.761166 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.54s
2025-05-14 02:44:18.761183 | orchestrator |
2025-05-14 02:44:18.761196 | orchestrator |
2025-05-14 02:44:18.761209 | orchestrator | PLAY [Group
hosts based on configuration] ************************************** 2025-05-14 02:44:18.761221 | orchestrator | 2025-05-14 02:44:18.761235 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:44:18.761253 | orchestrator | Wednesday 14 May 2025 02:41:04 +0000 (0:00:00.436) 0:00:00.436 ********* 2025-05-14 02:44:18.761272 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:44:18.761290 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:44:18.761307 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:44:18.761325 | orchestrator | 2025-05-14 02:44:18.761342 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:44:18.761360 | orchestrator | Wednesday 14 May 2025 02:41:05 +0000 (0:00:00.745) 0:00:01.182 ********* 2025-05-14 02:44:18.761379 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-05-14 02:44:18.761397 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-05-14 02:44:18.761416 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-05-14 02:44:18.761434 | orchestrator | 2025-05-14 02:44:18.761479 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-05-14 02:44:18.761497 | orchestrator | 2025-05-14 02:44:18.761509 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-14 02:44:18.761520 | orchestrator | Wednesday 14 May 2025 02:41:05 +0000 (0:00:00.633) 0:00:01.815 ********* 2025-05-14 02:44:18.761531 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:44:18.761542 | orchestrator | 2025-05-14 02:44:18.761554 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-05-14 02:44:18.761565 | orchestrator | Wednesday 14 May 2025 02:41:06 +0000 (0:00:00.957) 0:00:02.772 ********* 2025-05-14 02:44:18.761576 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-05-14 02:44:18.761587 | orchestrator | 2025-05-14 02:44:18.761613 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-05-14 02:44:18.761624 | orchestrator | Wednesday 14 May 2025 02:41:10 +0000 (0:00:03.418) 0:00:06.190 ********* 2025-05-14 02:44:18.761695 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-05-14 02:44:18.761798 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-05-14 02:44:18.761809 | orchestrator | 2025-05-14 02:44:18.761820 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-05-14 02:44:18.761831 | orchestrator | Wednesday 14 May 2025 02:41:17 +0000 (0:00:06.885) 0:00:13.076 ********* 2025-05-14 02:44:18.761842 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-05-14 02:44:18.761853 | orchestrator | 2025-05-14 02:44:18.761863 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-05-14 02:44:18.761888 | orchestrator | Wednesday 14 May 2025 02:41:20 +0000 (0:00:03.380) 0:00:16.456 ********* 2025-05-14 02:44:18.761919 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:44:18.761931 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-05-14 
02:44:18.761941 | orchestrator | 2025-05-14 02:44:18.761952 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-05-14 02:44:18.761963 | orchestrator | Wednesday 14 May 2025 02:41:24 +0000 (0:00:03.886) 0:00:20.343 ********* 2025-05-14 02:44:18.761973 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:44:18.761984 | orchestrator | 2025-05-14 02:44:18.761995 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-05-14 02:44:18.762005 | orchestrator | Wednesday 14 May 2025 02:41:27 +0000 (0:00:03.115) 0:00:23.458 ********* 2025-05-14 02:44:18.762071 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-14 02:44:18.762086 | orchestrator | 2025-05-14 02:44:18.762097 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-14 02:44:18.762107 | orchestrator | Wednesday 14 May 2025 02:41:31 +0000 (0:00:04.122) 0:00:27.580 ********* 2025-05-14 02:44:18.762123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.762139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.762158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.762171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.762471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.762511 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.762530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.762549 | orchestrator | 2025-05-14 02:44:18.762566 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-14 02:44:18.762578 | orchestrator | Wednesday 14 May 2025 02:41:34 +0000 (0:00:03.244) 0:00:30.825 ********* 2025-05-14 02:44:18.762589 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:18.762653 | orchestrator | 2025-05-14 02:44:18.762679 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-14 02:44:18.762705 | orchestrator | Wednesday 14 May 2025 02:41:34 +0000 (0:00:00.118) 0:00:30.943 ********* 2025-05-14 02:44:18.762723 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:18.762741 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:18.762759 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:18.762778 | orchestrator | 2025-05-14 02:44:18.762796 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-14 02:44:18.762875 | orchestrator | Wednesday 14 May 2025 02:41:35 +0000 (0:00:00.324) 0:00:31.268 ********* 2025-05-14 02:44:18.762889 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:44:18.762900 | orchestrator | 2025-05-14 02:44:18.762911 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-14 02:44:18.762922 | orchestrator | Wednesday 14 May 2025 02:41:35 +0000 (0:00:00.503) 0:00:31.772 ********* 2025-05-14 02:44:18.762942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.762966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.762979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.762991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.763267 | orchestrator | 2025-05-14 02:44:18.763279 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-14 02:44:18.763290 | orchestrator | Wednesday 14 May 2025 02:41:41 +0000 (0:00:05.808) 0:00:37.580 ********* 2025-05-14 02:44:18.763307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.763327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:44:18.763340 | orchestrator | 2025-05-14 02:44:18 | INFO  | Task 709f36ae-dd2c-46a5-b6a8-1bfc91e34c02 is in state SUCCESS 2025-05-14 02:44:18.763353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.763508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.763536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.763548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.763566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:44:18.763589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.763602 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:18.763614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.763625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.763637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.763654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.763666 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:18.763682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.763701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:44:18.763713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.763724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.763742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.763753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2025-05-14 02:44:18.763767 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:18.763786 | orchestrator | 2025-05-14 02:44:18.763805 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-14 02:44:18.763824 | orchestrator | Wednesday 14 May 2025 02:41:43 +0000 (0:00:01.874) 0:00:39.454 ********* 2025-05-14 02:44:18.763844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.763918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:44:18.763943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.763974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 
02:44:18.763994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:44:18.764013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764189 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:18.764209 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:18.764237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.764266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:44:18.764283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764338 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:18.764349 | orchestrator | 2025-05-14 02:44:18.764360 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-14 02:44:18.764371 | orchestrator | Wednesday 14 May 2025 02:41:44 +0000 (0:00:01.523) 0:00:40.977 ********* 2025-05-14 02:44:18.764388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.764408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.764420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.764443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.764892 | orchestrator | 2025-05-14 02:44:18.764910 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-14 02:44:18.764930 | orchestrator | Wednesday 14 May 2025 02:41:51 +0000 (0:00:06.421) 0:00:47.398 ********* 2025-05-14 02:44:18.764943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.764955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.764966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.764983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.764995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2025-05-14 02:44:18.765083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.765181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.765220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.765242 | orchestrator | 2025-05-14 02:44:18.765253 | orchestrator | TASK [designate : Copying over 
pools.yaml] ************************************* 2025-05-14 02:44:18.765264 | orchestrator | Wednesday 14 May 2025 02:42:18 +0000 (0:00:27.114) 0:01:14.513 ********* 2025-05-14 02:44:18.765275 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-14 02:44:18.765286 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-14 02:44:18.765297 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-14 02:44:18.765308 | orchestrator | 2025-05-14 02:44:18.765319 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-14 02:44:18.765330 | orchestrator | Wednesday 14 May 2025 02:42:26 +0000 (0:00:08.519) 0:01:23.032 ********* 2025-05-14 02:44:18.765341 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-14 02:44:18.765352 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-14 02:44:18.765362 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-14 02:44:18.765373 | orchestrator | 2025-05-14 02:44:18.765383 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-14 02:44:18.765394 | orchestrator | Wednesday 14 May 2025 02:42:33 +0000 (0:00:06.094) 0:01:29.127 ********* 2025-05-14 02:44:18.765411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.765857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.765883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.765896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.765919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.765950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.765962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.765982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.765994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.766097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.766160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.766196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.766250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766268 | orchestrator | 2025-05-14 02:44:18.766285 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-14 02:44:18.766303 | orchestrator | Wednesday 14 May 2025 02:42:38 +0000 (0:00:05.290) 0:01:34.417 ********* 2025-05-14 02:44:18.766331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.766352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.766369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.766398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.766425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.766571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.766679 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.766761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.766860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.766884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.766895 | orchestrator | 2025-05-14 02:44:18.766907 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-14 02:44:18.766918 | orchestrator | Wednesday 14 May 2025 02:42:42 +0000 (0:00:03.960) 0:01:38.378 ********* 2025-05-14 02:44:18.766937 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:18.766949 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:18.766960 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:18.766970 | orchestrator | 2025-05-14 02:44:18.766979 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-14 02:44:18.766989 | orchestrator | Wednesday 14 May 2025 02:42:42 +0000 (0:00:00.625) 0:01:39.003 ********* 2025-05-14 02:44:18.767000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.767010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:44:18.767026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 
02:44:18.767081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767091 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:18.767101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.767120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:44:18.767136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2025-05-14 02:44:18.767168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767198 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:18.767213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-14 02:44:18.767229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-14 02:44:18.767246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767340 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:18.767355 | orchestrator | 2025-05-14 02:44:18.767379 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-14 02:44:18.767397 | orchestrator | Wednesday 14 May 2025 02:42:44 +0000 
(0:00:01.193) 0:01:40.197 ********* 2025-05-14 02:44:18.767501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.767523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.767552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-14 02:44:18.767570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767588 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767824 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-14 02:44:18.767892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-14 02:44:18.767902 | orchestrator | 2025-05-14 02:44:18.767912 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-14 02:44:18.767922 | orchestrator | Wednesday 14 May 2025 02:42:49 +0000 (0:00:05.291) 0:01:45.489 ********* 2025-05-14 02:44:18.767932 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:44:18.767942 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:44:18.767951 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:44:18.767961 | orchestrator | 2025-05-14 02:44:18.767971 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-05-14 02:44:18.767980 | orchestrator | Wednesday 14 May 2025 02:42:50 +0000 (0:00:00.614) 0:01:46.103 ********* 2025-05-14 02:44:18.767990 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-05-14 02:44:18.768000 | orchestrator | 2025-05-14 02:44:18.768009 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-05-14 02:44:18.768019 | orchestrator | Wednesday 14 May 2025 02:42:52 +0000 (0:00:02.174) 0:01:48.278 ********* 2025-05-14 02:44:18.768028 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:44:18.768038 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-05-14 02:44:18.768047 | orchestrator | 2025-05-14 02:44:18.768057 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-05-14 02:44:18.768066 | orchestrator | Wednesday 14 May 2025 02:42:54 +0000 (0:00:02.324) 0:01:50.603 ********* 2025-05-14 02:44:18.768076 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:18.768086 | orchestrator | 2025-05-14 02:44:18.768103 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-14 02:44:18.768114 | orchestrator | Wednesday 14 May 2025 02:43:09 +0000 (0:00:14.610) 0:02:05.214 ********* 2025-05-14 02:44:18.768123 | orchestrator | 2025-05-14 02:44:18.768133 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-14 02:44:18.768142 | orchestrator | Wednesday 14 May 2025 02:43:09 +0000 (0:00:00.172) 0:02:05.386 ********* 2025-05-14 02:44:18.768152 | orchestrator | 2025-05-14 02:44:18.768161 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-14 02:44:18.768171 | orchestrator | Wednesday 14 May 2025 02:43:09 +0000 (0:00:00.113) 0:02:05.500 ********* 2025-05-14 02:44:18.768212 | orchestrator | 2025-05-14 02:44:18.768223 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-05-14 02:44:18.768233 | orchestrator | Wednesday 14 May 2025 02:43:09 +0000 (0:00:00.118) 0:02:05.619 ********* 2025-05-14 02:44:18.768242 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:18.768252 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:44:18.768261 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:44:18.768270 | orchestrator | 2025-05-14 02:44:18.768280 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-05-14 02:44:18.768289 | orchestrator | Wednesday 14 May 2025 02:43:25 +0000 (0:00:16.425) 0:02:22.045 
********* 2025-05-14 02:44:18.768299 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:18.768309 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:44:18.768318 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:44:18.768328 | orchestrator | 2025-05-14 02:44:18.768337 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-05-14 02:44:18.768359 | orchestrator | Wednesday 14 May 2025 02:43:40 +0000 (0:00:14.653) 0:02:36.699 ********* 2025-05-14 02:44:18.768369 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:44:18.768378 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:44:18.768388 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:18.768397 | orchestrator | 2025-05-14 02:44:18.768407 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-05-14 02:44:18.768416 | orchestrator | Wednesday 14 May 2025 02:43:50 +0000 (0:00:10.183) 0:02:46.882 ********* 2025-05-14 02:44:18.768425 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:18.768435 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:44:18.768444 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:44:18.768515 | orchestrator | 2025-05-14 02:44:18.768534 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-05-14 02:44:18.768552 | orchestrator | Wednesday 14 May 2025 02:43:57 +0000 (0:00:06.953) 0:02:53.835 ********* 2025-05-14 02:44:18.768569 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:18.768586 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:44:18.768603 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:44:18.768620 | orchestrator | 2025-05-14 02:44:18.768635 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-05-14 02:44:18.768650 | orchestrator | Wednesday 14 May 2025 02:44:04 +0000 (0:00:06.610) 0:03:00.446 ********* 2025-05-14 02:44:18.768660 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:18.768669 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:44:18.768679 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:44:18.768688 | orchestrator | 2025-05-14 02:44:18.768698 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-05-14 02:44:18.768710 | orchestrator | Wednesday 14 May 2025 02:44:11 +0000 (0:00:07.321) 0:03:07.767 ********* 2025-05-14 02:44:18.768728 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:44:18.768744 | orchestrator | 2025-05-14 02:44:18.768760 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:44:18.768787 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:44:18.768807 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:44:18.768823 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-14 02:44:18.768841 | orchestrator | 2025-05-14 02:44:18.768857 | orchestrator | 2025-05-14 02:44:18.768873 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:44:18.768884 | orchestrator | Wednesday 14 May 2025 02:44:17 +0000 (0:00:05.688) 0:03:13.456 ********* 2025-05-14 02:44:18.768894 | orchestrator | 
=============================================================================== 2025-05-14 02:44:18.768912 | orchestrator | designate : Copying over designate.conf -------------------------------- 27.11s 2025-05-14 02:44:18.768928 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 16.43s 2025-05-14 02:44:18.768945 | orchestrator | designate : Restart designate-api container ---------------------------- 14.65s 2025-05-14 02:44:18.768959 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.61s 2025-05-14 02:44:18.768973 | orchestrator | designate : Restart designate-central container ------------------------ 10.18s 2025-05-14 02:44:18.768985 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.52s 2025-05-14 02:44:18.768999 | orchestrator | designate : Restart designate-worker container -------------------------- 7.32s 2025-05-14 02:44:18.769013 | orchestrator | designate : Restart designate-producer container ------------------------ 6.95s 2025-05-14 02:44:18.769027 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.89s 2025-05-14 02:44:18.769052 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.61s 2025-05-14 02:44:18.769066 | orchestrator | designate : Copying over config.json files for services ----------------- 6.42s 2025-05-14 02:44:18.769079 | orchestrator | designate : Copying over named.conf ------------------------------------- 6.09s 2025-05-14 02:44:18.769089 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.81s 2025-05-14 02:44:18.769097 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.69s 2025-05-14 02:44:18.769105 | orchestrator | designate : Check designate containers ---------------------------------- 5.29s 2025-05-14 02:44:18.769112 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 5.29s 2025-05-14 02:44:18.769120 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.12s 2025-05-14 02:44:18.769128 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.96s 2025-05-14 02:44:18.769136 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.89s 2025-05-14 02:44:18.769143 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.42s 2025-05-14 02:44:18.769151 | orchestrator | 2025-05-14 02:44:18 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:18.769160 | orchestrator | 2025-05-14 02:44:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:21.799834 | orchestrator | 2025-05-14 02:44:21 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:21.799916 | orchestrator | 2025-05-14 02:44:21 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:21.800192 | orchestrator | 2025-05-14 02:44:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:21.800835 | orchestrator | 2025-05-14 02:44:21 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:21.801317 | orchestrator | 2025-05-14 02:44:21 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:21.801341 | orchestrator | 2025-05-14 02:44:21 | 
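The designate loop items logged above are all entries of a single service map that the kolla-ansible designate role iterates over; every "changed" or "skipping" result corresponds to one entry of that map on one node. Reconstructed from the logged values, the worker entry and the disabled sink entry look roughly like this (the top-level variable name designate_services and the exact nesting are assumptions based on kolla-ansible conventions; container names, image tags, volumes and healthcheck values are copied from the output above):

designate_services:
  designate-worker:
    container_name: designate_worker
    group: designate-worker
    enabled: true
    image: registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206
    volumes:
      - /etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - kolla_logs:/var/log/kolla/
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_port designate-worker 5672"]
      timeout: "30"
  designate-sink:
    container_name: designate_sink
    group: designate-sink
    enabled: false   # disabled service, hence "skipping" on every node above
    image: registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206

Entries with enabled set to false are filtered out by the role's conditionals, which is why only designate-sink is skipped while the other designate containers are checked, (re)created and finally restarted by the handlers logged above.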
INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:24.832825 | orchestrator | 2025-05-14 02:44:24 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:24.833049 | orchestrator | 2025-05-14 02:44:24 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:24.833081 | orchestrator | 2025-05-14 02:44:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:24.833668 | orchestrator | 2025-05-14 02:44:24 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:24.835740 | orchestrator | 2025-05-14 02:44:24 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:24.835793 | orchestrator | 2025-05-14 02:44:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:27.864165 | orchestrator | 2025-05-14 02:44:27 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:27.864294 | orchestrator | 2025-05-14 02:44:27 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:27.864310 | orchestrator | 2025-05-14 02:44:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:27.864322 | orchestrator | 2025-05-14 02:44:27 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:27.864345 | orchestrator | 2025-05-14 02:44:27 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:27.864356 | orchestrator | 2025-05-14 02:44:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:30.892003 | orchestrator | 2025-05-14 02:44:30 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:30.892872 | orchestrator | 2025-05-14 02:44:30 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:30.893671 | orchestrator | 2025-05-14 02:44:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:30.894386 | orchestrator | 2025-05-14 02:44:30 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:30.895732 | orchestrator | 2025-05-14 02:44:30 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:30.895786 | orchestrator | 2025-05-14 02:44:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:33.928724 | orchestrator | 2025-05-14 02:44:33 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:33.928834 | orchestrator | 2025-05-14 02:44:33 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:33.929326 | orchestrator | 2025-05-14 02:44:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:33.929907 | orchestrator | 2025-05-14 02:44:33 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:33.930418 | orchestrator | 2025-05-14 02:44:33 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:33.930478 | orchestrator | 2025-05-14 02:44:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:36.983895 | orchestrator | 2025-05-14 02:44:36 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:36.984532 | orchestrator | 2025-05-14 02:44:36 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:36.986087 | orchestrator | 2025-05-14 02:44:36 | 
INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:36.987564 | orchestrator | 2025-05-14 02:44:36 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:36.988248 | orchestrator | 2025-05-14 02:44:36 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:36.988287 | orchestrator | 2025-05-14 02:44:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:40.042955 | orchestrator | 2025-05-14 02:44:40 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:40.043680 | orchestrator | 2025-05-14 02:44:40 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:40.045091 | orchestrator | 2025-05-14 02:44:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:40.046720 | orchestrator | 2025-05-14 02:44:40 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:40.047991 | orchestrator | 2025-05-14 02:44:40 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:40.048038 | orchestrator | 2025-05-14 02:44:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:43.092951 | orchestrator | 2025-05-14 02:44:43 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:43.094822 | orchestrator | 2025-05-14 02:44:43 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:43.095956 | orchestrator | 2025-05-14 02:44:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:43.096759 | orchestrator | 2025-05-14 02:44:43 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:43.097482 | orchestrator | 2025-05-14 02:44:43 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:43.097523 | orchestrator | 2025-05-14 02:44:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:46.154078 | orchestrator | 2025-05-14 02:44:46 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:46.155214 | orchestrator | 2025-05-14 02:44:46 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:46.156210 | orchestrator | 2025-05-14 02:44:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:46.156843 | orchestrator | 2025-05-14 02:44:46 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:46.157601 | orchestrator | 2025-05-14 02:44:46 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:46.157660 | orchestrator | 2025-05-14 02:44:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:49.185200 | orchestrator | 2025-05-14 02:44:49 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:49.185320 | orchestrator | 2025-05-14 02:44:49 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:49.187775 | orchestrator | 2025-05-14 02:44:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:49.187847 | orchestrator | 2025-05-14 02:44:49 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:49.188225 | orchestrator | 2025-05-14 02:44:49 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:49.188261 | orchestrator | 
2025-05-14 02:44:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:52.233641 | orchestrator | 2025-05-14 02:44:52 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:52.234889 | orchestrator | 2025-05-14 02:44:52 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:52.235967 | orchestrator | 2025-05-14 02:44:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:52.237105 | orchestrator | 2025-05-14 02:44:52 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:52.238921 | orchestrator | 2025-05-14 02:44:52 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:52.238960 | orchestrator | 2025-05-14 02:44:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:55.297730 | orchestrator | 2025-05-14 02:44:55 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:55.298676 | orchestrator | 2025-05-14 02:44:55 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:55.300087 | orchestrator | 2025-05-14 02:44:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:55.301585 | orchestrator | 2025-05-14 02:44:55 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:55.302129 | orchestrator | 2025-05-14 02:44:55 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:55.302354 | orchestrator | 2025-05-14 02:44:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:44:58.363238 | orchestrator | 2025-05-14 02:44:58 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:44:58.367119 | orchestrator | 2025-05-14 02:44:58 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:44:58.369748 | orchestrator | 2025-05-14 02:44:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:44:58.372769 | orchestrator | 2025-05-14 02:44:58 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:44:58.375730 | orchestrator | 2025-05-14 02:44:58 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:44:58.376272 | orchestrator | 2025-05-14 02:44:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:01.434234 | orchestrator | 2025-05-14 02:45:01 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:01.436401 | orchestrator | 2025-05-14 02:45:01 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:45:01.438823 | orchestrator | 2025-05-14 02:45:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:01.439561 | orchestrator | 2025-05-14 02:45:01 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:45:01.441125 | orchestrator | 2025-05-14 02:45:01 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:01.441201 | orchestrator | 2025-05-14 02:45:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:04.482233 | orchestrator | 2025-05-14 02:45:04 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:04.483185 | orchestrator | 2025-05-14 02:45:04 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state STARTED 2025-05-14 02:45:04.489923 | orchestrator | 
2025-05-14 02:45:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:04.490654 | orchestrator | 2025-05-14 02:45:04 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:45:04.492650 | orchestrator | 2025-05-14 02:45:04 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:04.492727 | orchestrator | 2025-05-14 02:45:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:07.530322 | orchestrator | 2025-05-14 02:45:07 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:07.532034 | orchestrator | 2025-05-14 02:45:07 | INFO  | Task e936af72-2e6f-4257-9861-3799442bf447 is in state SUCCESS 2025-05-14 02:45:07.532090 | orchestrator | 2025-05-14 02:45:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:07.533023 | orchestrator | 2025-05-14 02:45:07 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:07.533061 | orchestrator | 2025-05-14 02:45:07 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:45:07.534366 | orchestrator | 2025-05-14 02:45:07 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:07.534388 | orchestrator | 2025-05-14 02:45:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:10.564527 | orchestrator | 2025-05-14 02:45:10 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:10.564624 | orchestrator | 2025-05-14 02:45:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:10.564931 | orchestrator | 2025-05-14 02:45:10 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:10.565484 | orchestrator | 2025-05-14 02:45:10 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:45:10.565994 | orchestrator | 2025-05-14 02:45:10 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:10.566073 | orchestrator | 2025-05-14 02:45:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:13.587324 | orchestrator | 2025-05-14 02:45:13 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:13.587624 | orchestrator | 2025-05-14 02:45:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:13.587809 | orchestrator | 2025-05-14 02:45:13 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:13.588664 | orchestrator | 2025-05-14 02:45:13 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:45:13.590281 | orchestrator | 2025-05-14 02:45:13 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:13.590342 | orchestrator | 2025-05-14 02:45:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:16.628041 | orchestrator | 2025-05-14 02:45:16 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:16.630099 | orchestrator | 2025-05-14 02:45:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:16.630156 | orchestrator | 2025-05-14 02:45:16 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:16.630930 | orchestrator | 2025-05-14 02:45:16 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 
02:45:16.631908 | orchestrator | 2025-05-14 02:45:16 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:16.631953 | orchestrator | 2025-05-14 02:45:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:19.676251 | orchestrator | 2025-05-14 02:45:19 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:19.677960 | orchestrator | 2025-05-14 02:45:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:19.679524 | orchestrator | 2025-05-14 02:45:19 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:19.682504 | orchestrator | 2025-05-14 02:45:19 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state STARTED 2025-05-14 02:45:19.683618 | orchestrator | 2025-05-14 02:45:19 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:19.683654 | orchestrator | 2025-05-14 02:45:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:22.719856 | orchestrator | 2025-05-14 02:45:22 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:22.721082 | orchestrator | 2025-05-14 02:45:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:22.723119 | orchestrator | 2025-05-14 02:45:22 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:22.725273 | orchestrator | 2025-05-14 02:45:22 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:45:22.727580 | orchestrator | 2025-05-14 02:45:22 | INFO  | Task ba538f54-10db-45d5-95d8-a6dc56bdb55a is in state SUCCESS 2025-05-14 02:45:22.729503 | orchestrator | 2025-05-14 02:45:22.729582 | orchestrator | 2025-05-14 02:45:22.729602 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:45:22.729621 | orchestrator | 2025-05-14 02:45:22.729639 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:45:22.729656 | orchestrator | Wednesday 14 May 2025 02:44:26 +0000 (0:00:00.792) 0:00:00.792 ********* 2025-05-14 02:45:22.729672 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:45:22.729689 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:45:22.729703 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:45:22.729713 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:45:22.729723 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:45:22.729760 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:45:22.729771 | orchestrator | ok: [testbed-manager] 2025-05-14 02:45:22.729781 | orchestrator | 2025-05-14 02:45:22.729791 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:45:22.729800 | orchestrator | Wednesday 14 May 2025 02:44:27 +0000 (0:00:01.541) 0:00:02.333 ********* 2025-05-14 02:45:22.729810 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-05-14 02:45:22.729820 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-05-14 02:45:22.729829 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-05-14 02:45:22.729839 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-05-14 02:45:22.729849 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-05-14 02:45:22.729858 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-05-14 02:45:22.729868 | 
orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-05-14 02:45:22.729878 | orchestrator | 2025-05-14 02:45:22.729887 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-14 02:45:22.729897 | orchestrator | 2025-05-14 02:45:22.729906 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-05-14 02:45:22.729916 | orchestrator | Wednesday 14 May 2025 02:44:29 +0000 (0:00:01.963) 0:00:04.296 ********* 2025-05-14 02:45:22.729926 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2025-05-14 02:45:22.729937 | orchestrator | 2025-05-14 02:45:22.729947 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-05-14 02:45:22.729957 | orchestrator | Wednesday 14 May 2025 02:44:32 +0000 (0:00:02.954) 0:00:07.251 ********* 2025-05-14 02:45:22.729967 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-05-14 02:45:22.729976 | orchestrator | 2025-05-14 02:45:22.729986 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-05-14 02:45:22.729997 | orchestrator | Wednesday 14 May 2025 02:44:36 +0000 (0:00:04.103) 0:00:11.355 ********* 2025-05-14 02:45:22.730009 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-05-14 02:45:22.730083 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-05-14 02:45:22.730095 | orchestrator | 2025-05-14 02:45:22.730106 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-05-14 02:45:22.730117 | orchestrator | Wednesday 14 May 2025 02:44:43 +0000 (0:00:07.060) 0:00:18.415 ********* 2025-05-14 02:45:22.730128 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:45:22.730140 | orchestrator | 2025-05-14 02:45:22.730151 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-05-14 02:45:22.730175 | orchestrator | Wednesday 14 May 2025 02:44:47 +0000 (0:00:03.696) 0:00:22.112 ********* 2025-05-14 02:45:22.730186 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:45:22.730195 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-05-14 02:45:22.730205 | orchestrator | 2025-05-14 02:45:22.730223 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-05-14 02:45:22.730232 | orchestrator | Wednesday 14 May 2025 02:44:51 +0000 (0:00:03.890) 0:00:26.003 ********* 2025-05-14 02:45:22.730242 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:45:22.730252 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-05-14 02:45:22.730261 | orchestrator | 2025-05-14 02:45:22.730271 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-05-14 02:45:22.730281 | orchestrator | Wednesday 14 May 2025 02:44:58 +0000 (0:00:07.022) 0:00:33.025 ********* 2025-05-14 02:45:22.730290 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-05-14 02:45:22.730308 | orchestrator | 2025-05-14 02:45:22.730318 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-14 02:45:22.730327 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:45:22.730338 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:45:22.730348 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:45:22.730358 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:45:22.730367 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:45:22.730391 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:45:22.730402 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:45:22.730449 | orchestrator | 2025-05-14 02:45:22.730459 | orchestrator | 2025-05-14 02:45:22.730469 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:45:22.730479 | orchestrator | Wednesday 14 May 2025 02:45:04 +0000 (0:00:05.985) 0:00:39.011 ********* 2025-05-14 02:45:22.730489 | orchestrator | =============================================================================== 2025-05-14 02:45:22.730498 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.06s 2025-05-14 02:45:22.730507 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.02s 2025-05-14 02:45:22.730517 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.99s 2025-05-14 02:45:22.730526 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.10s 2025-05-14 02:45:22.730536 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.89s 2025-05-14 02:45:22.730545 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.70s 2025-05-14 02:45:22.730555 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.95s 2025-05-14 02:45:22.730565 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.96s 2025-05-14 02:45:22.730574 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.54s 2025-05-14 02:45:22.730584 | orchestrator | 2025-05-14 02:45:22.730593 | orchestrator | 2025-05-14 02:45:22.730603 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:45:22.730612 | orchestrator | 2025-05-14 02:45:22.730622 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:45:22.730631 | orchestrator | Wednesday 14 May 2025 02:43:14 +0000 (0:00:00.315) 0:00:00.315 ********* 2025-05-14 02:45:22.730641 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:45:22.730651 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:45:22.730660 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:45:22.730670 | orchestrator | 2025-05-14 02:45:22.730680 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:45:22.730689 | orchestrator | Wednesday 14 May 2025 02:43:14 +0000 (0:00:00.619) 0:00:00.934 ********* 2025-05-14 
02:45:22.730699 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-05-14 02:45:22.730708 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-05-14 02:45:22.730718 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-05-14 02:45:22.730738 | orchestrator | 2025-05-14 02:45:22.730748 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-05-14 02:45:22.730758 | orchestrator | 2025-05-14 02:45:22.730767 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-14 02:45:22.730784 | orchestrator | Wednesday 14 May 2025 02:43:15 +0000 (0:00:00.840) 0:00:01.774 ********* 2025-05-14 02:45:22.730794 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:45:22.730804 | orchestrator | 2025-05-14 02:45:22.730813 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-05-14 02:45:22.730823 | orchestrator | Wednesday 14 May 2025 02:43:16 +0000 (0:00:01.373) 0:00:03.148 ********* 2025-05-14 02:45:22.730833 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-05-14 02:45:22.730843 | orchestrator | 2025-05-14 02:45:22.730852 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-05-14 02:45:22.730867 | orchestrator | Wednesday 14 May 2025 02:43:20 +0000 (0:00:03.709) 0:00:06.858 ********* 2025-05-14 02:45:22.730877 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-05-14 02:45:22.730887 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-05-14 02:45:22.730897 | orchestrator | 2025-05-14 02:45:22.730906 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-05-14 02:45:22.730916 | orchestrator | Wednesday 14 May 2025 02:43:27 +0000 (0:00:06.732) 0:00:13.590 ********* 2025-05-14 02:45:22.730925 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:45:22.730935 | orchestrator | 2025-05-14 02:45:22.730944 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-05-14 02:45:22.730954 | orchestrator | Wednesday 14 May 2025 02:43:31 +0000 (0:00:03.763) 0:00:17.354 ********* 2025-05-14 02:45:22.730964 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:45:22.730973 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-05-14 02:45:22.730983 | orchestrator | 2025-05-14 02:45:22.730993 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-05-14 02:45:22.731003 | orchestrator | Wednesday 14 May 2025 02:43:35 +0000 (0:00:04.224) 0:00:21.578 ********* 2025-05-14 02:45:22.731012 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:45:22.731023 | orchestrator | 2025-05-14 02:45:22.731032 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-05-14 02:45:22.731042 | orchestrator | Wednesday 14 May 2025 02:43:38 +0000 (0:00:03.423) 0:00:25.001 ********* 2025-05-14 02:45:22.731051 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-05-14 02:45:22.731061 | orchestrator | 2025-05-14 02:45:22.731071 | orchestrator | TASK [magnum : Creating 
Magnum trustee domain] ********************************* 2025-05-14 02:45:22.731080 | orchestrator | Wednesday 14 May 2025 02:43:43 +0000 (0:00:04.621) 0:00:29.623 ********* 2025-05-14 02:45:22.731090 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:22.731099 | orchestrator | 2025-05-14 02:45:22.731109 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-05-14 02:45:22.731129 | orchestrator | Wednesday 14 May 2025 02:43:46 +0000 (0:00:03.287) 0:00:32.911 ********* 2025-05-14 02:45:22.731138 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:22.731148 | orchestrator | 2025-05-14 02:45:22.731158 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-05-14 02:45:22.731167 | orchestrator | Wednesday 14 May 2025 02:43:51 +0000 (0:00:04.501) 0:00:37.412 ********* 2025-05-14 02:45:22.731177 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:22.731186 | orchestrator | 2025-05-14 02:45:22.731196 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-14 02:45:22.731206 | orchestrator | Wednesday 14 May 2025 02:43:55 +0000 (0:00:04.029) 0:00:41.441 ********* 2025-05-14 02:45:22.731220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.731240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.731256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.731268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.731286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.731304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.731314 | orchestrator | 2025-05-14 02:45:22.731324 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-14 02:45:22.731334 | orchestrator | Wednesday 14 May 2025 02:43:57 +0000 (0:00:01.883) 0:00:43.325 ********* 2025-05-14 02:45:22.731344 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:22.731354 | orchestrator | 2025-05-14 02:45:22.731364 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-14 02:45:22.731373 | orchestrator | Wednesday 14 May 2025 02:43:57 +0000 (0:00:00.121) 0:00:43.446 ********* 2025-05-14 
02:45:22.731383 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:22.731393 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:22.731403 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:22.731435 | orchestrator | 2025-05-14 02:45:22.731445 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-14 02:45:22.731455 | orchestrator | Wednesday 14 May 2025 02:43:57 +0000 (0:00:00.358) 0:00:43.804 ********* 2025-05-14 02:45:22.731464 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:45:22.731474 | orchestrator | 2025-05-14 02:45:22.731483 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-14 02:45:22.731493 | orchestrator | Wednesday 14 May 2025 02:43:58 +0000 (0:00:00.632) 0:00:44.437 ********* 2025-05-14 02:45:22.731508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:45:22.731519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:45:22.731530 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:22.731549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:45:22.731566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:45:22.731577 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:22.731587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:45:22.731603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:45:22.731614 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:22.731624 | orchestrator | 2025-05-14 02:45:22.731633 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-14 02:45:22.731643 | orchestrator | Wednesday 14 May 2025 02:43:59 +0000 (0:00:01.484) 0:00:45.921 ********* 2025-05-14 02:45:22.731653 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:22.731663 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:22.731673 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:22.731683 | orchestrator | 2025-05-14 02:45:22.731693 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-14 02:45:22.731703 | orchestrator | Wednesday 14 May 2025 02:44:00 +0000 (0:00:00.396) 0:00:46.317 ********* 2025-05-14 02:45:22.731713 | orchestrator | 
included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:45:22.731730 | orchestrator | 2025-05-14 02:45:22.731740 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-14 02:45:22.731750 | orchestrator | Wednesday 14 May 2025 02:44:01 +0000 (0:00:00.980) 0:00:47.298 ********* 2025-05-14 02:45:22.731767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.731778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.731790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.731805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.731816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.731841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.731853 | orchestrator | 2025-05-14 02:45:22.731863 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-14 02:45:22.731872 | orchestrator | Wednesday 14 May 2025 02:44:04 +0000 (0:00:03.101) 0:00:50.399 ********* 2025-05-14 02:45:22.731883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:45:22.731893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:45:22.731904 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:22.731924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:45:22.731948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:45:22.731959 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:22.731970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:45:22.731980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:45:22.731990 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:22.732000 | orchestrator | 2025-05-14 02:45:22.732010 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-14 02:45:22.732020 | orchestrator | Wednesday 14 May 2025 02:44:05 +0000 (0:00:01.433) 0:00:51.833 ********* 2025-05-14 02:45:22.732034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:45:22.732053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:45:22.732063 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:22.732080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:45:22.732092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:45:22.732102 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:22.732112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:45:22.732127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:45:22.732144 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:22.732154 | orchestrator | 2025-05-14 02:45:22.732164 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-14 02:45:22.732174 | orchestrator | Wednesday 14 May 2025 02:44:07 +0000 (0:00:01.608) 0:00:53.441 ********* 2025-05-14 02:45:22.732184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.732201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.732212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.732228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.732245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.732256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.732266 | orchestrator | 2025-05-14 02:45:22.732280 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-14 02:45:22.732291 | orchestrator | Wednesday 14 May 2025 02:44:10 +0000 (0:00:03.413) 0:00:56.854 ********* 2025-05-14 02:45:22.732301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.732311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.732326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.732343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.732359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.732370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.732380 | orchestrator | 2025-05-14 02:45:22.732389 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-14 02:45:22.732399 | orchestrator | Wednesday 14 May 2025 02:44:19 +0000 (0:00:09.173) 0:01:06.028 ********* 2025-05-14 02:45:22.732470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:45:22.732498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:45:22.732509 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:22.732520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:45:22.732538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:45:22.732550 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:22.732560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-14 02:45:22.732571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:45:22.732589 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:22.732600 | orchestrator | 2025-05-14 02:45:22.732610 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-14 02:45:22.732621 | orchestrator | Wednesday 14 May 2025 02:44:21 +0000 (0:00:02.110) 0:01:08.139 ********* 2025-05-14 02:45:22.732636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.732655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.732666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-14 02:45:22.732678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.732695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.732714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-05-14 02:45:22.732726 | orchestrator | 2025-05-14 02:45:22.732736 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-14 02:45:22.732746 | orchestrator | Wednesday 14 May 2025 02:44:25 +0000 (0:00:03.727) 0:01:11.866 ********* 2025-05-14 02:45:22.732757 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:22.732767 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:22.732778 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:22.732788 | orchestrator | 2025-05-14 02:45:22.732798 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-14 02:45:22.732809 | orchestrator | Wednesday 14 May 2025 02:44:26 +0000 (0:00:00.529) 0:01:12.396 ********* 2025-05-14 02:45:22.732819 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:22.732829 | orchestrator | 2025-05-14 02:45:22.732840 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-14 02:45:22.732850 | orchestrator | Wednesday 14 May 2025 02:44:29 +0000 (0:00:03.264) 0:01:15.660 ********* 2025-05-14 02:45:22.732860 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:22.732870 | orchestrator | 2025-05-14 02:45:22.732880 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-14 02:45:22.732891 | orchestrator | Wednesday 14 May 2025 02:44:32 +0000 (0:00:02.729) 0:01:18.390 ********* 2025-05-14 02:45:22.732901 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:22.732912 | orchestrator | 2025-05-14 02:45:22.732929 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-14 02:45:22.732939 | orchestrator | Wednesday 14 May 2025 02:44:46 +0000 (0:00:14.697) 0:01:33.087 ********* 2025-05-14 02:45:22.732948 | orchestrator | 2025-05-14 02:45:22.732956 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-14 02:45:22.732965 | orchestrator | Wednesday 14 May 2025 02:44:47 +0000 (0:00:00.130) 0:01:33.218 ********* 2025-05-14 02:45:22.732973 | orchestrator | 2025-05-14 02:45:22.732982 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-14 02:45:22.732990 | orchestrator | Wednesday 14 May 2025 02:44:47 +0000 (0:00:00.183) 0:01:33.402 ********* 2025-05-14 02:45:22.732999 | orchestrator | 2025-05-14 02:45:22.733008 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-14 02:45:22.733016 | orchestrator | Wednesday 14 May 2025 02:44:47 +0000 (0:00:00.081) 0:01:33.484 ********* 2025-05-14 02:45:22.733031 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:22.733040 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:45:22.733049 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:45:22.733058 | orchestrator | 2025-05-14 02:45:22.733066 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-14 02:45:22.733075 | orchestrator | Wednesday 14 May 2025 02:45:05 +0000 (0:00:17.903) 0:01:51.387 ********* 2025-05-14 02:45:22.733083 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:22.733092 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:45:22.733100 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:45:22.733109 | orchestrator | 2025-05-14 02:45:22.733117 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-14 02:45:22.733126 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-14 02:45:22.733135 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:45:22.733143 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:45:22.733152 | orchestrator | 2025-05-14 02:45:22.733160 | orchestrator | 2025-05-14 02:45:22.733169 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:45:22.733178 | orchestrator | Wednesday 14 May 2025 02:45:20 +0000 (0:00:15.373) 0:02:06.761 ********* 2025-05-14 02:45:22.733186 | orchestrator | =============================================================================== 2025-05-14 02:45:22.733195 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.90s 2025-05-14 02:45:22.733203 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.37s 2025-05-14 02:45:22.733212 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.70s 2025-05-14 02:45:22.733221 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 9.17s 2025-05-14 02:45:22.733229 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.73s 2025-05-14 02:45:22.733238 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.62s 2025-05-14 02:45:22.733247 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.50s 2025-05-14 02:45:22.733255 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.22s 2025-05-14 02:45:22.733268 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.03s 2025-05-14 02:45:22.733276 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.76s 2025-05-14 02:45:22.733285 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.73s 2025-05-14 02:45:22.733293 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.71s 2025-05-14 02:45:22.733301 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.42s 2025-05-14 02:45:22.733310 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.41s 2025-05-14 02:45:22.733319 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.29s 2025-05-14 02:45:22.733327 | orchestrator | magnum : Creating Magnum database --------------------------------------- 3.27s 2025-05-14 02:45:22.733335 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.10s 2025-05-14 02:45:22.733344 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.73s 2025-05-14 02:45:22.733352 | orchestrator | magnum : Copying over existing policy file ------------------------------ 2.11s 2025-05-14 02:45:22.733360 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.88s 2025-05-14 02:45:22.733369 | orchestrator | 2025-05-14 02:45:22 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 
02:45:22.733383 | orchestrator | 2025-05-14 02:45:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:25.773760 | orchestrator | 2025-05-14 02:45:25 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:25.773852 | orchestrator | 2025-05-14 02:45:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:25.773910 | orchestrator | 2025-05-14 02:45:25 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:25.774527 | orchestrator | 2025-05-14 02:45:25 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:45:25.776049 | orchestrator | 2025-05-14 02:45:25 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:25.776106 | orchestrator | 2025-05-14 02:45:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:28.815933 | orchestrator | 2025-05-14 02:45:28 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:28.816836 | orchestrator | 2025-05-14 02:45:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:28.816866 | orchestrator | 2025-05-14 02:45:28 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:28.817154 | orchestrator | 2025-05-14 02:45:28 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:45:28.818236 | orchestrator | 2025-05-14 02:45:28 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:28.818266 | orchestrator | 2025-05-14 02:45:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:31.856945 | orchestrator | 2025-05-14 02:45:31 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:31.857150 | orchestrator | 2025-05-14 02:45:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:31.858104 | orchestrator | 2025-05-14 02:45:31 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:31.858767 | orchestrator | 2025-05-14 02:45:31 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:45:31.860210 | orchestrator | 2025-05-14 02:45:31 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:31.861496 | orchestrator | 2025-05-14 02:45:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:34.895578 | orchestrator | 2025-05-14 02:45:34 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:34.895687 | orchestrator | 2025-05-14 02:45:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:34.897103 | orchestrator | 2025-05-14 02:45:34 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:34.898666 | orchestrator | 2025-05-14 02:45:34 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:45:34.903224 | orchestrator | 2025-05-14 02:45:34 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:34.903276 | orchestrator | 2025-05-14 02:45:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:37.951815 | orchestrator | 2025-05-14 02:45:37 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:37.952379 | orchestrator | 2025-05-14 02:45:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 
02:45:37.952778 | orchestrator | 2025-05-14 02:45:37 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:37.953518 | orchestrator | 2025-05-14 02:45:37 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:45:37.954278 | orchestrator | 2025-05-14 02:45:37 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:37.954386 | orchestrator | 2025-05-14 02:45:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:40.993731 | orchestrator | 2025-05-14 02:45:40 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:40.994380 | orchestrator | 2025-05-14 02:45:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:40.996237 | orchestrator | 2025-05-14 02:45:40 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:40.996899 | orchestrator | 2025-05-14 02:45:40 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:45:40.997459 | orchestrator | 2025-05-14 02:45:40 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:40.997488 | orchestrator | 2025-05-14 02:45:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:44.033653 | orchestrator | 2025-05-14 02:45:44 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:44.033745 | orchestrator | 2025-05-14 02:45:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:44.034161 | orchestrator | 2025-05-14 02:45:44 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:44.034824 | orchestrator | 2025-05-14 02:45:44 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:45:44.035378 | orchestrator | 2025-05-14 02:45:44 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:44.035431 | orchestrator | 2025-05-14 02:45:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:47.078739 | orchestrator | 2025-05-14 02:45:47 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:47.078861 | orchestrator | 2025-05-14 02:45:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:47.079837 | orchestrator | 2025-05-14 02:45:47 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:47.082511 | orchestrator | 2025-05-14 02:45:47 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:45:47.082773 | orchestrator | 2025-05-14 02:45:47 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:47.085639 | orchestrator | 2025-05-14 02:45:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:50.117564 | orchestrator | 2025-05-14 02:45:50 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:50.119371 | orchestrator | 2025-05-14 02:45:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:50.119698 | orchestrator | 2025-05-14 02:45:50 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:50.120456 | orchestrator | 2025-05-14 02:45:50 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:45:50.121060 | orchestrator | 2025-05-14 02:45:50 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in 
state STARTED 2025-05-14 02:45:50.121112 | orchestrator | 2025-05-14 02:45:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:53.156896 | orchestrator | 2025-05-14 02:45:53 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:53.157774 | orchestrator | 2025-05-14 02:45:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:53.158675 | orchestrator | 2025-05-14 02:45:53 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:53.160055 | orchestrator | 2025-05-14 02:45:53 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:45:53.160934 | orchestrator | 2025-05-14 02:45:53 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state STARTED 2025-05-14 02:45:53.160967 | orchestrator | 2025-05-14 02:45:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:56.197003 | orchestrator | 2025-05-14 02:45:56 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:45:56.198149 | orchestrator | 2025-05-14 02:45:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:45:56.198815 | orchestrator | 2025-05-14 02:45:56 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:45:56.199854 | orchestrator | 2025-05-14 02:45:56 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:45:56.202710 | orchestrator | 2025-05-14 02:45:56 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:45:56.209978 | orchestrator | 2025-05-14 02:45:56 | INFO  | Task 12c1a633-5622-40ca-800b-e18a73c69252 is in state SUCCESS 2025-05-14 02:45:56.211270 | orchestrator | 2025-05-14 02:45:56.211317 | orchestrator | 2025-05-14 02:45:56.211326 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:45:56.211334 | orchestrator | 2025-05-14 02:45:56.211341 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:45:56.211347 | orchestrator | Wednesday 14 May 2025 02:41:04 +0000 (0:00:00.527) 0:00:00.527 ********* 2025-05-14 02:45:56.211353 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:45:56.211360 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:45:56.211366 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:45:56.211392 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:45:56.211399 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:45:56.211406 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:45:56.211412 | orchestrator | 2025-05-14 02:45:56.211418 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:45:56.211424 | orchestrator | Wednesday 14 May 2025 02:41:06 +0000 (0:00:01.181) 0:00:01.708 ********* 2025-05-14 02:45:56.211430 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-14 02:45:56.211437 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-14 02:45:56.211452 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-14 02:45:56.211459 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-14 02:45:56.211472 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-14 02:45:56.211478 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-05-14 02:45:56.211485 | orchestrator | 2025-05-14 02:45:56.211491 | 
orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-14 02:45:56.211497 | orchestrator | 2025-05-14 02:45:56.211503 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-14 02:45:56.211509 | orchestrator | Wednesday 14 May 2025 02:41:06 +0000 (0:00:00.719) 0:00:02.428 ********* 2025-05-14 02:45:56.211516 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:45:56.211523 | orchestrator | 2025-05-14 02:45:56.211562 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-05-14 02:45:56.211568 | orchestrator | Wednesday 14 May 2025 02:41:07 +0000 (0:00:01.184) 0:00:03.613 ********* 2025-05-14 02:45:56.211574 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:45:56.211580 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:45:56.211586 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:45:56.211622 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:45:56.211629 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:45:56.211635 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:45:56.211642 | orchestrator | 2025-05-14 02:45:56.211648 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-14 02:45:56.211654 | orchestrator | Wednesday 14 May 2025 02:41:09 +0000 (0:00:01.183) 0:00:04.797 ********* 2025-05-14 02:45:56.211660 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:45:56.211665 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:45:56.211671 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:45:56.211677 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:45:56.211683 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:45:56.211689 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:45:56.211695 | orchestrator | 2025-05-14 02:45:56.211701 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-05-14 02:45:56.211747 | orchestrator | Wednesday 14 May 2025 02:41:10 +0000 (0:00:01.011) 0:00:05.809 ********* 2025-05-14 02:45:56.211754 | orchestrator | ok: [testbed-node-0] => { 2025-05-14 02:45:56.211761 | orchestrator |  "changed": false, 2025-05-14 02:45:56.211767 | orchestrator |  "msg": "All assertions passed" 2025-05-14 02:45:56.211774 | orchestrator | } 2025-05-14 02:45:56.211779 | orchestrator | ok: [testbed-node-1] => { 2025-05-14 02:45:56.211785 | orchestrator |  "changed": false, 2025-05-14 02:45:56.211791 | orchestrator |  "msg": "All assertions passed" 2025-05-14 02:45:56.211797 | orchestrator | } 2025-05-14 02:45:56.211803 | orchestrator | ok: [testbed-node-2] => { 2025-05-14 02:45:56.211810 | orchestrator |  "changed": false, 2025-05-14 02:45:56.211816 | orchestrator |  "msg": "All assertions passed" 2025-05-14 02:45:56.211822 | orchestrator | } 2025-05-14 02:45:56.211828 | orchestrator | ok: [testbed-node-3] => { 2025-05-14 02:45:56.211834 | orchestrator |  "changed": false, 2025-05-14 02:45:56.211839 | orchestrator |  "msg": "All assertions passed" 2025-05-14 02:45:56.211845 | orchestrator | } 2025-05-14 02:45:56.211851 | orchestrator | ok: [testbed-node-4] => { 2025-05-14 02:45:56.211857 | orchestrator |  "changed": false, 2025-05-14 02:45:56.211863 | orchestrator |  "msg": "All assertions passed" 2025-05-14 02:45:56.211869 | orchestrator | } 2025-05-14 02:45:56.211874 | orchestrator | ok: [testbed-node-5] 
=> { 2025-05-14 02:45:56.211880 | orchestrator |  "changed": false, 2025-05-14 02:45:56.211886 | orchestrator |  "msg": "All assertions passed" 2025-05-14 02:45:56.211891 | orchestrator | } 2025-05-14 02:45:56.211898 | orchestrator | 2025-05-14 02:45:56.211904 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-14 02:45:56.211910 | orchestrator | Wednesday 14 May 2025 02:41:10 +0000 (0:00:00.636) 0:00:06.445 ********* 2025-05-14 02:45:56.211917 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.211924 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.211930 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.211937 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.211943 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.211950 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.211956 | orchestrator | 2025-05-14 02:45:56.211976 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-14 02:45:56.211983 | orchestrator | Wednesday 14 May 2025 02:41:11 +0000 (0:00:00.733) 0:00:07.178 ********* 2025-05-14 02:45:56.211989 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-14 02:45:56.211995 | orchestrator | 2025-05-14 02:45:56.212002 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-14 02:45:56.212008 | orchestrator | Wednesday 14 May 2025 02:41:14 +0000 (0:00:03.349) 0:00:10.527 ********* 2025-05-14 02:45:56.212015 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-14 02:45:56.212022 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-14 02:45:56.212028 | orchestrator | 2025-05-14 02:45:56.212056 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-14 02:45:56.212063 | orchestrator | Wednesday 14 May 2025 02:41:21 +0000 (0:00:06.298) 0:00:16.826 ********* 2025-05-14 02:45:56.212069 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:45:56.212075 | orchestrator | 2025-05-14 02:45:56.212081 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-14 02:45:56.212087 | orchestrator | Wednesday 14 May 2025 02:41:24 +0000 (0:00:03.246) 0:00:20.073 ********* 2025-05-14 02:45:56.212093 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:45:56.212099 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-14 02:45:56.212105 | orchestrator | 2025-05-14 02:45:56.212111 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-14 02:45:56.212117 | orchestrator | Wednesday 14 May 2025 02:41:28 +0000 (0:00:04.088) 0:00:24.162 ********* 2025-05-14 02:45:56.212123 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:45:56.212129 | orchestrator | 2025-05-14 02:45:56.212135 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-14 02:45:56.212141 | orchestrator | Wednesday 14 May 2025 02:41:32 +0000 (0:00:03.593) 0:00:27.756 ********* 2025-05-14 02:45:56.212147 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-14 02:45:56.212153 | orchestrator | changed: [testbed-node-0] => (item=neutron 
-> service -> service) 2025-05-14 02:45:56.212158 | orchestrator | 2025-05-14 02:45:56.212164 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-14 02:45:56.212170 | orchestrator | Wednesday 14 May 2025 02:41:40 +0000 (0:00:08.513) 0:00:36.269 ********* 2025-05-14 02:45:56.212176 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.212183 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.212188 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.212194 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.212200 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.212206 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.212212 | orchestrator | 2025-05-14 02:45:56.212218 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-14 02:45:56.212224 | orchestrator | Wednesday 14 May 2025 02:41:41 +0000 (0:00:00.732) 0:00:37.001 ********* 2025-05-14 02:45:56.212230 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.212236 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.212242 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.212247 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.212253 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.212259 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.212265 | orchestrator | 2025-05-14 02:45:56.212271 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-14 02:45:56.212277 | orchestrator | Wednesday 14 May 2025 02:41:44 +0000 (0:00:03.432) 0:00:40.433 ********* 2025-05-14 02:45:56.212283 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:45:56.212289 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:45:56.212295 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:45:56.212301 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:45:56.212307 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:45:56.212313 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:45:56.212319 | orchestrator | 2025-05-14 02:45:56.212325 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-14 02:45:56.212330 | orchestrator | Wednesday 14 May 2025 02:41:45 +0000 (0:00:01.194) 0:00:41.627 ********* 2025-05-14 02:45:56.212336 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.212342 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.212348 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.212354 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.212360 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.212366 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.212425 | orchestrator | 2025-05-14 02:45:56.212433 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-14 02:45:56.212439 | orchestrator | Wednesday 14 May 2025 02:41:48 +0000 (0:00:02.721) 0:00:44.349 ********* 2025-05-14 02:45:56.212454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.212471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.212505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.212519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.212548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.212561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.212588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.212606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.212613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.212622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.212651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.212664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.212670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.212695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.212708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.212718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.212757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.212767 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.212773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.212782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.213021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.213032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.213059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.213073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 
02:45:56.213085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.213120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.213134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.213144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.213158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213167 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.213178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.213186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.213205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.213212 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.213232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.213259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.213271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.213280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.213288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.213298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.213305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.213313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.213334 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.213347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.213351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.213819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.213849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213887 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.213895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.213908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.213920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.213946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.213953 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213959 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.213968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.213982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.213988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.213995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.214008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.214064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214080 | orchestrator | 2025-05-14 02:45:56.214086 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-14 02:45:56.214093 | orchestrator | Wednesday 14 May 2025 02:41:51 +0000 (0:00:02.660) 0:00:47.010 ********* 2025-05-14 02:45:56.214099 | orchestrator | [WARNING]: Skipped 2025-05-14 02:45:56.214106 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-14 02:45:56.214114 | orchestrator | due to this access issue: 2025-05-14 02:45:56.214124 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-14 02:45:56.214131 | orchestrator | a directory 2025-05-14 02:45:56.214137 | 
orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:45:56.214143 | orchestrator | 2025-05-14 02:45:56.214149 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-14 02:45:56.214155 | orchestrator | Wednesday 14 May 2025 02:41:51 +0000 (0:00:00.625) 0:00:47.635 ********* 2025-05-14 02:45:56.214162 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:45:56.214169 | orchestrator | 2025-05-14 02:45:56.214175 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-14 02:45:56.214182 | orchestrator | Wednesday 14 May 2025 02:41:53 +0000 (0:00:01.449) 0:00:49.084 ********* 2025-05-14 02:45:56.214189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.214196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.214203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.214219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.214230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.214237 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.214243 | orchestrator | 2025-05-14 02:45:56.214248 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-14 02:45:56.214254 | orchestrator | Wednesday 14 May 2025 02:41:58 +0000 (0:00:04.695) 0:00:53.780 ********* 2025-05-14 02:45:56.214260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.214267 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.214279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.214290 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.214299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.214306 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.214313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.214319 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.214325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.214331 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.214338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.214349 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.214356 | orchestrator | 2025-05-14 02:45:56.214362 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-14 02:45:56.214368 | orchestrator | Wednesday 14 May 2025 02:42:03 +0000 (0:00:05.626) 0:00:59.406 ********* 2025-05-14 02:45:56.214420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.214427 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.214438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
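The 'healthcheck' mappings that recur in the loop items above (interval, retries, start_period, test, timeout, with kolla's healthcheck_port / healthcheck_curl commands) follow Docker's container healthcheck model, with the durations given in seconds as strings. As a minimal illustrative sketch only, assuming Docker Engine HealthConfig semantics (durations expressed in nanoseconds), one such mapping could be translated as below; to_health_config is a hypothetical helper written for this example, not a kolla-ansible or OSISM function:

```python
# Illustrative sketch only (not part of this deployment job): convert one of the
# kolla-style 'healthcheck' mappings printed in the loop items above into the
# field names used by Docker's HealthConfig, where durations are nanoseconds.

NS_PER_SECOND = 1_000_000_000

# Dict literal copied from a neutron-ovn-metadata-agent item in this task output.
item = {
    "container_name": "neutron_ovn_metadata_agent",
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
        "timeout": "30",
    },
}

def to_health_config(healthcheck: dict) -> dict:
    """Map the string-valued kolla healthcheck mapping onto Docker-style fields."""
    return {
        "Test": healthcheck["test"],
        "Interval": int(healthcheck["interval"]) * NS_PER_SECOND,
        "Timeout": int(healthcheck["timeout"]) * NS_PER_SECOND,
        "StartPeriod": int(healthcheck["start_period"]) * NS_PER_SECOND,
        "Retries": int(healthcheck["retries"]),
    }

if __name__ == "__main__":
    print(item["container_name"], to_health_config(item["healthcheck"]))
```

The string-to-int casts reflect that these values appear as quoted strings in the service definitions logged above; the agent checks use healthcheck_port against the process and its AMQP or OVSDB port (5672 or 6640), while the neutron-server items instead use healthcheck_curl against 192.168.16.1x:9696, the same port the accompanying haproxy entries expose internally and externally via api.testbed.osism.xyz.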
2025-05-14 02:45:56.214444 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.214450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.214457 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.214463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.214475 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.214481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.214487 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.214496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.214503 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.214509 | orchestrator | 2025-05-14 02:45:56.214517 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-14 02:45:56.214524 | orchestrator | Wednesday 14 May 2025 02:42:08 +0000 (0:00:04.636) 0:01:04.043 ********* 2025-05-14 02:45:56.214530 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.214536 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.214542 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.214549 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.214555 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.214561 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.214567 | orchestrator | 2025-05-14 02:45:56.214573 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-14 02:45:56.214580 | orchestrator | Wednesday 14 May 2025 02:42:13 +0000 (0:00:04.686) 0:01:08.729 ********* 2025-05-14 02:45:56.214586 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.214592 | orchestrator | 2025-05-14 02:45:56.214598 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-14 02:45:56.214604 | orchestrator | Wednesday 14 May 2025 02:42:13 +0000 (0:00:00.124) 0:01:08.853 ********* 2025-05-14 02:45:56.214611 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.214617 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.214623 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.214629 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.214635 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.214641 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.214647 | orchestrator | 2025-05-14 02:45:56.214653 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-14 02:45:56.214660 | orchestrator | Wednesday 14 May 2025 02:42:14 +0000 (0:00:01.380) 0:01:10.234 ********* 2025-05-14 02:45:56.214666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.214677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.214808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214820 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.214827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.214834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.214853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.214870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.214877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.214894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.214904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214910 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.214917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.214927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.214959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.214975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.214981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.214992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.215034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.215047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.215054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.215086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.215094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.215098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215109 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.215116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.215125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.215285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.215295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215299 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.215303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.215311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.215316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.215369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.215414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.215471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.215492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.215515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.215529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.215554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.215576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.215580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.215588 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.215595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 
'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215602 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.215609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.215613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.215636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.215650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.215654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.215660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216041 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.216073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216092 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.216100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.216107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216119 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.216125 | orchestrator | 2025-05-14 02:45:56.216132 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-14 02:45:56.216138 | orchestrator | Wednesday 14 May 2025 02:42:18 +0000 (0:00:03.832) 0:01:14.067 ********* 2025-05-14 02:45:56.216158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.216166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.216202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.216239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.216278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216284 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.216320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.216333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.216352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.216368 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.216436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.216480 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216511 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.216522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.216528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.216739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.216756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216779 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.216877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.216903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.216914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.216951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.216960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.216976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.216994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.216998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.217025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.217033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.217051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.217058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217062 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.217069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.217077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.217094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.217101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217105 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.217109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 
02:45:56.217263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.217283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.217287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.217301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.217313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217321 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.217328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.217334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217341 | orchestrator | 2025-05-14 02:45:56.217345 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-14 02:45:56.217349 | orchestrator | Wednesday 14 May 2025 02:42:23 +0000 (0:00:05.252) 0:01:19.319 ********* 2025-05-14 02:45:56.217353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.217357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.217401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.217453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.217472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.217501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.217524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 
'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.217553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.217561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.217843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.217891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.217938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.217950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.217957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.217980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.217986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.217993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.217999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.218012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.218055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.218062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.218090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.218097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.218161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.218185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.218196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.218214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.218218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.218222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.218229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-05-14 02:45:56.218233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.218291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
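(Not part of the job output — an illustrative note on reading the loop labels above.) The pattern visible in this task's output is that an item is applied ("changed") only when the service is enabled and the current host reports 'host_in_groups': True; every other combination is reported as "skipping". A minimal, hypothetical Python sketch of that filtering, using abbreviated example dicts in the same shape as the logged items (names and values here are illustrative, not an excerpt from kolla-ansible):

    # Hypothetical, abbreviated service map in the shape of the logged loop items.
    services = {
        "neutron-server": {
            "container_name": "neutron_server",
            "enabled": True,
            "host_in_groups": True,
            "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"]},
        },
        "neutron-dhcp-agent": {
            "container_name": "neutron_dhcp_agent",
            "enabled": False,          # disabled in this OVN-based testbed
            "host_in_groups": True,
            "healthcheck": {"test": ["CMD-SHELL", "healthcheck_port neutron-dhcp-agent 5672"]},
        },
    }

    def deployed_services(services: dict) -> dict:
        """Keep only the services the config task would actually act on."""
        return {
            name: svc
            for name, svc in services.items()
            # 'enabled' appears both as a bool and as a string ('no') in the
            # logged data, so normalise it before testing.
            if str(svc.get("enabled")).lower() in ("true", "yes")
            and svc.get("host_in_groups", False)
        }

    for name, svc in deployed_services(services).items():
        # The healthcheck 'test' list is a Docker-style CMD-SHELL command.
        print(name, "->", " ".join(svc["healthcheck"]["test"][1:]))

Run against the example dicts above, this prints only the neutron-server entry, mirroring which items show up as "changed" rather than "skipping" in the log.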
2025-05-14 02:45:56.218316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.218725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.218741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.218754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.218766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.218785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.218795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.218808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.218818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.218832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.218847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.218863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.218869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218881 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.218888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.218894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218905 | orchestrator | 2025-05-14 02:45:56.218911 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-14 02:45:56.218918 | orchestrator | Wednesday 14 May 2025 02:42:33 +0000 (0:00:09.343) 0:01:28.662 ********* 2025-05-14 02:45:56.218924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.218931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.218966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.218978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.218987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.218996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.219002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.219018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.219025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.219042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.219265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.219281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.219327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.219341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.219347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.219558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.219596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.219602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.219739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.219760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219767 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.219775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-05-14 02:45:56.219781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.219817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.219830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.219837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219846 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.219861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.219880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.219895 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.219921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.219927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.219933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.219952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.219959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.219966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.219999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.220007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.220022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.220043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.220056 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.220062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.220093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.220099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.220124 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.220134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.220146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.220152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.220168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.220188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.220194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.220208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.220218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.220238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220250 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.220271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.220287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.220294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.220706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.220748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.220760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.220774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.220786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220792 | orchestrator | 2025-05-14 02:45:56.220798 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-14 02:45:56.220805 | orchestrator | Wednesday 14 May 2025 02:42:38 +0000 (0:00:05.608) 0:01:34.270 ********* 2025-05-14 02:45:56.220811 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.220817 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.220823 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:56.220829 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:45:56.220834 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.220840 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:45:56.220847 | orchestrator | 2025-05-14 02:45:56.220853 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] 
************************************* 2025-05-14 02:45:56.220859 | orchestrator | Wednesday 14 May 2025 02:42:44 +0000 (0:00:05.786) 0:01:40.057 ********* 2025-05-14 02:45:56.220885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.220893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220916 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.220932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.220946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.220950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.220961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.220980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.220987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.220991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.221000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.221004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221007 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.221022 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.221029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.221058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.221075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.221081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.221671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.221706 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.221714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.221740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.221746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221752 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.221758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.221777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221787 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221857 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.221863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.221888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.221897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.221916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.221928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.221948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.221970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.221977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.221984 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.221990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.222010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.222346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222478 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.222504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.222524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.222544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.222565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.222571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.222583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.222590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.222721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.222734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.222740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.222746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.222778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.222786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.222798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.222817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.222835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.222842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.222848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.223042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.223062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.223070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.223074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.223078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.223083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.223087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.223133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.223141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.223145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.223149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.223153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.223158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.223182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.223188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 
5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.223192 | orchestrator | 2025-05-14 02:45:56.223196 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-14 02:45:56.223201 | orchestrator | Wednesday 14 May 2025 02:42:48 +0000 (0:00:04.032) 0:01:44.090 ********* 2025-05-14 02:45:56.223205 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.223209 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.223212 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.223216 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.223220 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.223223 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.223227 | orchestrator | 2025-05-14 02:45:56.223231 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-14 02:45:56.223235 | orchestrator | Wednesday 14 May 2025 02:42:50 +0000 (0:00:02.262) 0:01:46.353 ********* 2025-05-14 02:45:56.223238 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.223242 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.223246 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.223249 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.223253 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.223257 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.223260 | orchestrator | 2025-05-14 02:45:56.223264 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-14 02:45:56.223268 | orchestrator | Wednesday 14 May 2025 02:42:52 +0000 (0:00:02.257) 0:01:48.611 ********* 2025-05-14 02:45:56.223271 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.223275 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.223279 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.223282 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.223286 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.223290 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.223294 | orchestrator | 2025-05-14 02:45:56.223297 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-05-14 02:45:56.223301 | orchestrator | Wednesday 14 May 2025 02:42:54 +0000 (0:00:02.011) 0:01:50.622 ********* 2025-05-14 02:45:56.223309 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.223312 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.223316 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.223320 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.223324 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.223330 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.223336 | orchestrator | 2025-05-14 02:45:56.223341 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-14 02:45:56.223347 | orchestrator | Wednesday 14 May 2025 02:42:57 +0000 (0:00:02.210) 0:01:52.833 ********* 2025-05-14 02:45:56.223353 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.223358 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.223364 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.223369 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.223389 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.223396 | 
orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.223402 | orchestrator | 2025-05-14 02:45:56.223408 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-14 02:45:56.223414 | orchestrator | Wednesday 14 May 2025 02:42:59 +0000 (0:00:02.331) 0:01:55.165 ********* 2025-05-14 02:45:56.223420 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.223426 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.223432 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.223438 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.223490 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.223497 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.223502 | orchestrator | 2025-05-14 02:45:56.223508 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-14 02:45:56.223514 | orchestrator | Wednesday 14 May 2025 02:43:01 +0000 (0:00:01.848) 0:01:57.013 ********* 2025-05-14 02:45:56.223520 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-14 02:45:56.223528 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.223534 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-14 02:45:56.223540 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.223762 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-14 02:45:56.223774 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.223818 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-14 02:45:56.223826 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.223833 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-14 02:45:56.223839 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.223845 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-14 02:45:56.223851 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.223857 | orchestrator | 2025-05-14 02:45:56.223863 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-14 02:45:56.223869 | orchestrator | Wednesday 14 May 2025 02:43:03 +0000 (0:00:02.363) 0:01:59.377 ********* 2025-05-14 02:45:56.223882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.223898 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:45:56.224285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False,
'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.224305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.224311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.224363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.224423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.224433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.224443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.224494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224500 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.224507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.224515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': 
{'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.224606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.224614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.224650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.224655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.224664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.225062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.225091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.225162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.225181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.225194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.225206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.225455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.225501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.225508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.225515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.225574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.225605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.225650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.225697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.225703 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.225709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.225772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.225782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225788 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.225794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.225800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.225817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.225855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.225872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.225878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.225926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225949 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.225955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.225962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.225972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.226081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.226094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.226152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.226156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226161 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.226165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.226200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.226220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.226275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.226287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.226329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.226333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226337 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.226341 | orchestrator | 2025-05-14 02:45:56.226345 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-14 02:45:56.226349 | orchestrator | Wednesday 14 May 2025 02:43:06 +0000 (0:00:02.606) 0:02:01.984 ********* 2025-05-14 02:45:56.226359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.226363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.226432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.226481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.226493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.226535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.226539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226547 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.226551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.226555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.226597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 
'timeout': '30'}}})  2025-05-14 02:45:56.226613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.226634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.226649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.226708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.226747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.226760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.226792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.226820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.226829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.226858 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.226865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 
'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.226890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226900 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.226909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.226915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.226967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.226980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-05-14 02:45:56.226986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.226992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.227035 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.227055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.227061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 
'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.227074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.227100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.227113 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227119 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.227125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.227182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.227196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.227200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.227208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.227247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.227251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.227259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.227263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227270 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.227290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.227297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.227327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227333 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.227338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.227342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.227350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.227415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.227424 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.227432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.227436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227445 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.227448 | orchestrator | 2025-05-14 02:45:56.227452 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-14 02:45:56.227457 | orchestrator | Wednesday 14 May 2025 02:43:08 +0000 (0:00:02.064) 0:02:04.049 ********* 2025-05-14 02:45:56.227460 | orchestrator | skipping: [testbed-node-0] 
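For reference, the long run of skipped items above comes from the role iterating over its per-service definition dictionary (in kolla-ansible this is normally the neutron_services variable from the role defaults) and acting only on entries whose enabled flag and host group match the current node. Below is a minimal YAML sketch of one such entry, reconstructed from the neutron-ovn-metadata-agent item printed in this log; the neutron_services top-level key is an assumption based on the usual kolla-ansible role layout, not something shown in the output itself:

neutron_services:
  # Reconstructed from the item data in the log output above.
  neutron-ovn-metadata-agent:
    container_name: neutron_ovn_metadata_agent
    image: registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206
    enabled: true
    host_in_groups: true           # varies per node; only matching hosts get the container
    volumes:
      - /etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - neutron_metadata_socket:/var/lib/neutron/kolla/
      - /run/openvswitch:/run/openvswitch:shared
      - /run/netns:/run/netns:shared
      - kolla_logs:/var/log/kolla/
    dimensions: {}
    healthcheck:
      interval: '30'
      retries: '3'
      start_period: '5'
      test: ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640']
      timeout: '30'

Entries whose enabled flag is false (here the linuxbridge, SR-IOV, DHCP, L3, BGP dragent and metering agents, among others) are skipped on every node, which is why this task produces almost exclusively skipping output; in this OVN-based testbed only neutron-server and neutron-ovn-metadata-agent are enabled.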
2025-05-14 02:45:56.227464 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.227468 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.227471 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.227475 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.227479 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.227483 | orchestrator | 2025-05-14 02:45:56.227486 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-14 02:45:56.227490 | orchestrator | Wednesday 14 May 2025 02:43:13 +0000 (0:00:04.676) 0:02:08.725 ********* 2025-05-14 02:45:56.227494 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.227497 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.227501 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.227516 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:45:56.227520 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:45:56.227524 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:45:56.227527 | orchestrator | 2025-05-14 02:45:56.227531 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-14 02:45:56.227535 | orchestrator | Wednesday 14 May 2025 02:43:19 +0000 (0:00:06.585) 0:02:15.311 ********* 2025-05-14 02:45:56.227539 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.227542 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.227546 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.227550 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.227554 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.227558 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.227561 | orchestrator | 2025-05-14 02:45:56.227565 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-05-14 02:45:56.227569 | orchestrator | Wednesday 14 May 2025 02:43:21 +0000 (0:00:02.019) 0:02:17.330 ********* 2025-05-14 02:45:56.227573 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.227576 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.227583 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.227586 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.227590 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.227594 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.227597 | orchestrator | 2025-05-14 02:45:56.227601 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-05-14 02:45:56.227605 | orchestrator | Wednesday 14 May 2025 02:43:23 +0000 (0:00:02.181) 0:02:19.511 ********* 2025-05-14 02:45:56.227609 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.227612 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.227616 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.227620 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.227624 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.227627 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.227631 | orchestrator | 2025-05-14 02:45:56.227635 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-05-14 02:45:56.227639 | orchestrator | Wednesday 14 May 2025 02:43:28 +0000 (0:00:04.900) 0:02:24.412 ********* 2025-05-14 02:45:56.227642 | orchestrator | skipping: 
[testbed-node-0] 2025-05-14 02:45:56.227646 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.227650 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.227658 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.227662 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.227665 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.227669 | orchestrator | 2025-05-14 02:45:56.227673 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-05-14 02:45:56.227677 | orchestrator | Wednesday 14 May 2025 02:43:33 +0000 (0:00:04.287) 0:02:28.699 ********* 2025-05-14 02:45:56.227683 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.227689 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.227695 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.227701 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.227706 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.227712 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.227718 | orchestrator | 2025-05-14 02:45:56.227724 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-05-14 02:45:56.227728 | orchestrator | Wednesday 14 May 2025 02:43:35 +0000 (0:00:02.255) 0:02:30.955 ********* 2025-05-14 02:45:56.227732 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.227736 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.227739 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.227743 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.227747 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.227751 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.227754 | orchestrator | 2025-05-14 02:45:56.227758 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-05-14 02:45:56.227762 | orchestrator | Wednesday 14 May 2025 02:43:38 +0000 (0:00:03.630) 0:02:34.586 ********* 2025-05-14 02:45:56.227766 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.227769 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.227773 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.227777 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.227780 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.227784 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.227788 | orchestrator | 2025-05-14 02:45:56.227792 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-05-14 02:45:56.227795 | orchestrator | Wednesday 14 May 2025 02:43:41 +0000 (0:00:02.277) 0:02:36.863 ********* 2025-05-14 02:45:56.227799 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.227803 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.227806 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.227810 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.227814 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.227818 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.227821 | orchestrator | 2025-05-14 02:45:56.227825 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-05-14 02:45:56.227829 | orchestrator | Wednesday 14 May 2025 02:43:44 +0000 (0:00:03.447) 0:02:40.310 ********* 2025-05-14 02:45:56.227833 | orchestrator | 
skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 02:45:56.227837 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.227840 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 02:45:56.227844 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.227848 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 02:45:56.227852 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.227856 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 02:45:56.227859 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.227863 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 02:45:56.227878 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.227882 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-14 02:45:56.227889 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.227893 | orchestrator | 2025-05-14 02:45:56.227897 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-14 02:45:56.227901 | orchestrator | Wednesday 14 May 2025 02:43:46 +0000 (0:00:02.171) 0:02:42.482 ********* 2025-05-14 02:45:56.227908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.227912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.227943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.227952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.227957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.227966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.227982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.227991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.227995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.228005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.228010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228017 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.228032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.228037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.228068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-05-14 02:45:56.228093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.228115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2025-05-14 02:45:56.228131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.228136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228146 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.228165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.228175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.228204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.228250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.228267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.228300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.228306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228316 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.228322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.228340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.228417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 
02:45:56.228472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.228492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.228520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228527 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.228545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': 
False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.228581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.228587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228605 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.228612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.228654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.228670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.228706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.228712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228725 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.228731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.228737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.228781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 
02:45:56.228851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.228868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.228874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2025-05-14 02:45:56.228904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.228910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228920 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.228927 | orchestrator | 2025-05-14 02:45:56.228933 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-14 02:45:56.228939 | orchestrator | Wednesday 14 May 2025 02:43:49 +0000 (0:00:02.225) 0:02:44.708 ********* 2025-05-14 02:45:56.228946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.228952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  
2025-05-14 02:45:56.228970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.228990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.228996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 
02:45:56.229008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.229048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.229060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.229091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.229104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229110 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.229116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.229150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.229191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.229215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-14 02:45:56.229256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.229262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.229306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.229320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.229358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.229364 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.229389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-14 02:45:56.229409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.229421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-14 02:45:56.229451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.229481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.229500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.229507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.229525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.229532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.229549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.229553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.229567 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.229577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.229585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.229602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.229608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229612 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-14 02:45:56.229619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:45:56.229627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:45:56.229633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-14 02:45:56.229643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-14 02:45:56.229650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-14 02:45:56.229653 | orchestrator | 2025-05-14 02:45:56.229657 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-14 02:45:56.229661 | orchestrator | Wednesday 14 May 2025 02:43:54 +0000 (0:00:05.194) 0:02:49.902 ********* 2025-05-14 02:45:56.229665 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:45:56.229669 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:45:56.229672 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:45:56.229676 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:45:56.229680 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:45:56.229684 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:45:56.229687 | orchestrator | 2025-05-14 02:45:56.229691 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-14 02:45:56.229695 | orchestrator | Wednesday 14 May 2025 02:43:54 +0000 (0:00:00.726) 0:02:50.629 ********* 2025-05-14 02:45:56.229699 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:56.229702 | orchestrator | 2025-05-14 02:45:56.229706 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-14 02:45:56.229710 | orchestrator | Wednesday 14 May 2025 02:43:57 +0000 (0:00:02.443) 0:02:53.073 ********* 2025-05-14 02:45:56.229714 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:56.229717 | orchestrator | 2025-05-14 02:45:56.229721 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-14 02:45:56.229725 | orchestrator | Wednesday 14 May 2025 02:43:59 +0000 (0:00:02.413) 0:02:55.486 ********* 2025-05-14 02:45:56.229728 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:45:56.229732 | orchestrator | 2025-05-14 02:45:56.229736 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-14 02:45:56.229740 | 
orchestrator | Wednesday 14 May 2025 02:44:40 +0000 (0:00:40.471) 0:03:35.958 *********
2025-05-14 02:45:56.229743 | orchestrator |
2025-05-14 02:45:56.229749 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-14 02:45:56.229753 | orchestrator | Wednesday 14 May 2025 02:44:40 +0000 (0:00:00.057) 0:03:36.015 *********
2025-05-14 02:45:56.229757 | orchestrator |
2025-05-14 02:45:56.229761 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-14 02:45:56.229764 | orchestrator | Wednesday 14 May 2025 02:44:40 +0000 (0:00:00.242) 0:03:36.257 *********
2025-05-14 02:45:56.229768 | orchestrator |
2025-05-14 02:45:56.229772 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-14 02:45:56.229775 | orchestrator | Wednesday 14 May 2025 02:44:40 +0000 (0:00:00.058) 0:03:36.316 *********
2025-05-14 02:45:56.229779 | orchestrator |
2025-05-14 02:45:56.229783 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-14 02:45:56.229786 | orchestrator | Wednesday 14 May 2025 02:44:40 +0000 (0:00:00.054) 0:03:36.370 *********
2025-05-14 02:45:56.229790 | orchestrator |
2025-05-14 02:45:56.229794 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-05-14 02:45:56.229797 | orchestrator | Wednesday 14 May 2025 02:44:40 +0000 (0:00:00.051) 0:03:36.422 *********
2025-05-14 02:45:56.229801 | orchestrator |
2025-05-14 02:45:56.229810 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-05-14 02:45:56.229814 | orchestrator | Wednesday 14 May 2025 02:44:40 +0000 (0:00:00.177) 0:03:36.599 *********
2025-05-14 02:45:56.229818 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:45:56.229821 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:45:56.229825 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:45:56.229829 | orchestrator |
2025-05-14 02:45:56.229832 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-05-14 02:45:56.229836 | orchestrator | Wednesday 14 May 2025 02:45:03 +0000 (0:00:22.209) 0:03:58.808 *********
2025-05-14 02:45:56.229840 | orchestrator | changed: [testbed-node-4]
2025-05-14 02:45:56.229846 | orchestrator | changed: [testbed-node-3]
2025-05-14 02:45:56.229852 | orchestrator | changed: [testbed-node-5]
2025-05-14 02:45:56.229858 | orchestrator |
2025-05-14 02:45:56.229864 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 02:45:56.229870 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-14 02:45:56.229877 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-14 02:45:56.229883 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-05-14 02:45:56.229889 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-14 02:45:56.229894 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-14 02:45:56.229900 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-05-14 02:45:56.229906 | orchestrator |
2025-05-14 02:45:56.229912 | orchestrator |
2025-05-14 02:45:56.229917 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:45:56.229923 | orchestrator | Wednesday 14 May 2025 02:45:54 +0000 (0:00:50.922) 0:04:49.731 *********
2025-05-14 02:45:56.229929 | orchestrator | ===============================================================================
2025-05-14 02:45:56.229935 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 50.92s
2025-05-14 02:45:56.229941 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.47s
2025-05-14 02:45:56.229947 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.21s
2025-05-14 02:45:56.229952 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 9.34s
2025-05-14 02:45:56.229958 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.51s
2025-05-14 02:45:56.229964 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 6.59s
2025-05-14 02:45:56.229970 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.30s
2025-05-14 02:45:56.229976 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.79s
2025-05-14 02:45:56.229981 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 5.63s
2025-05-14 02:45:56.229987 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 5.61s
2025-05-14 02:45:56.229993 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.25s
2025-05-14 02:45:56.229999 | orchestrator | neutron : Check neutron containers -------------------------------------- 5.19s
2025-05-14 02:45:56.230005 | orchestrator | neutron : Copying over ironic_neutron_agent.ini ------------------------- 4.90s
2025-05-14 02:45:56.230011 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.70s
2025-05-14 02:45:56.230044 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.69s
2025-05-14 02:45:56.230055 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 4.68s
2025-05-14 02:45:56.230061 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.64s
2025-05-14 02:45:56.230067 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 4.29s
2025-05-14 02:45:56.230077 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.09s
2025-05-14 02:45:56.230083 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.03s
2025-05-14 02:45:59.250504 | orchestrator | 2025-05-14 02:45:59 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED
2025-05-14 02:45:59.251950 | orchestrator | 2025-05-14 02:45:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:45:59.253486 | orchestrator | 2025-05-14 02:45:59 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED
2025-05-14 02:45:59.255637 | orchestrator | 2025-05-14 02:45:59 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED
2025-05-14 02:45:59.256897 | orchestrator | 2025-05-14 02:45:59 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED
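Each item echoed by the "Check neutron containers" loop above carries a kolla-style healthcheck block with the keys interval, retries, start_period, test and timeout, where test is a CMD-SHELL command such as healthcheck_curl or healthcheck_port. As an illustration only, and assuming the numeric values are meant as seconds, such a block could be mapped onto Docker's health options roughly as in the following sketch; the helper name healthcheck_to_docker_args is invented for this example and is not part of kolla-ansible.

# Hedged sketch: convert a kolla-style healthcheck dict (as seen in the loop
# items above) into `docker run` health flags. The flag names are real Docker
# CLI options; interpreting the values as seconds is an assumption.
def healthcheck_to_docker_args(hc):
    # 'test' is e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696']
    cmd = " ".join(hc["test"][1:]) if hc["test"][0] == "CMD-SHELL" else " ".join(hc["test"])
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

example = {"interval": "30", "retries": "3", "start_period": "5",
           "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.12:9696"],
           "timeout": "30"}
# healthcheck_to_docker_args(example)[0] -> '--health-cmd=healthcheck_curl http://192.168.16.12:9696'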
2025-05-14 02:45:59.256961 | orchestrator | 2025-05-14 02:45:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:02.294450 | orchestrator | 2025-05-14 02:46:02 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:02.294849 | orchestrator | 2025-05-14 02:46:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:02.296069 | orchestrator | 2025-05-14 02:46:02 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:02.297759 | orchestrator | 2025-05-14 02:46:02 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:02.299246 | orchestrator | 2025-05-14 02:46:02 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:02.299286 | orchestrator | 2025-05-14 02:46:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:05.339024 | orchestrator | 2025-05-14 02:46:05 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:05.340023 | orchestrator | 2025-05-14 02:46:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:05.342587 | orchestrator | 2025-05-14 02:46:05 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:05.343061 | orchestrator | 2025-05-14 02:46:05 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:05.344375 | orchestrator | 2025-05-14 02:46:05 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:05.345688 | orchestrator | 2025-05-14 02:46:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:08.374287 | orchestrator | 2025-05-14 02:46:08 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:08.374428 | orchestrator | 2025-05-14 02:46:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:08.374694 | orchestrator | 2025-05-14 02:46:08 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:08.375623 | orchestrator | 2025-05-14 02:46:08 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:08.375848 | orchestrator | 2025-05-14 02:46:08 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:08.375887 | orchestrator | 2025-05-14 02:46:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:11.418009 | orchestrator | 2025-05-14 02:46:11 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:11.418194 | orchestrator | 2025-05-14 02:46:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:11.418691 | orchestrator | 2025-05-14 02:46:11 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:11.419206 | orchestrator | 2025-05-14 02:46:11 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:11.421444 | orchestrator | 2025-05-14 02:46:11 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:11.421520 | orchestrator | 2025-05-14 02:46:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:14.460221 | orchestrator | 2025-05-14 02:46:14 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:14.463091 | orchestrator | 2025-05-14 02:46:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 
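The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines are the osism client polling the state of the five background tasks it triggered. A minimal sketch of that polling pattern, assuming a hypothetical get_state(task_id) callable that returns Celery-like states (this is not the actual OSISM implementation), looks like this:

# Hedged sketch of the visible polling pattern: check each pending task,
# report its state, and sleep one second between rounds until none remain.
import time

def wait_for_tasks(get_state, task_ids, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)  # e.g. 'STARTED', 'SUCCESS', 'FAILURE' (assumed states)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)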
2025-05-14 02:46:14.463389 | orchestrator | 2025-05-14 02:46:14 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:14.464330 | orchestrator | 2025-05-14 02:46:14 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:14.464601 | orchestrator | 2025-05-14 02:46:14 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:14.464623 | orchestrator | 2025-05-14 02:46:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:17.499463 | orchestrator | 2025-05-14 02:46:17 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:17.500625 | orchestrator | 2025-05-14 02:46:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:17.500828 | orchestrator | 2025-05-14 02:46:17 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:17.501398 | orchestrator | 2025-05-14 02:46:17 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:17.501945 | orchestrator | 2025-05-14 02:46:17 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:17.501969 | orchestrator | 2025-05-14 02:46:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:20.525929 | orchestrator | 2025-05-14 02:46:20 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:20.526210 | orchestrator | 2025-05-14 02:46:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:20.527214 | orchestrator | 2025-05-14 02:46:20 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:20.528759 | orchestrator | 2025-05-14 02:46:20 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:20.529280 | orchestrator | 2025-05-14 02:46:20 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:20.529312 | orchestrator | 2025-05-14 02:46:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:23.569043 | orchestrator | 2025-05-14 02:46:23 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:23.569117 | orchestrator | 2025-05-14 02:46:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:23.569124 | orchestrator | 2025-05-14 02:46:23 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:23.569131 | orchestrator | 2025-05-14 02:46:23 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:23.569138 | orchestrator | 2025-05-14 02:46:23 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:23.569172 | orchestrator | 2025-05-14 02:46:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:26.604494 | orchestrator | 2025-05-14 02:46:26 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:26.605608 | orchestrator | 2025-05-14 02:46:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:26.606717 | orchestrator | 2025-05-14 02:46:26 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:26.607285 | orchestrator | 2025-05-14 02:46:26 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:26.608505 | orchestrator | 2025-05-14 02:46:26 | INFO  | Task 
981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:26.608520 | orchestrator | 2025-05-14 02:46:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:29.644221 | orchestrator | 2025-05-14 02:46:29 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:29.644324 | orchestrator | 2025-05-14 02:46:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:29.644759 | orchestrator | 2025-05-14 02:46:29 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:29.645734 | orchestrator | 2025-05-14 02:46:29 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:29.646264 | orchestrator | 2025-05-14 02:46:29 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:29.646302 | orchestrator | 2025-05-14 02:46:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:32.688717 | orchestrator | 2025-05-14 02:46:32 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:32.689946 | orchestrator | 2025-05-14 02:46:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:32.690172 | orchestrator | 2025-05-14 02:46:32 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:32.690730 | orchestrator | 2025-05-14 02:46:32 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:32.691253 | orchestrator | 2025-05-14 02:46:32 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:32.691273 | orchestrator | 2025-05-14 02:46:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:35.713723 | orchestrator | 2025-05-14 02:46:35 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:35.713833 | orchestrator | 2025-05-14 02:46:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:35.714113 | orchestrator | 2025-05-14 02:46:35 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:35.714606 | orchestrator | 2025-05-14 02:46:35 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:35.714956 | orchestrator | 2025-05-14 02:46:35 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:35.714979 | orchestrator | 2025-05-14 02:46:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:38.746320 | orchestrator | 2025-05-14 02:46:38 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:38.746463 | orchestrator | 2025-05-14 02:46:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:38.749330 | orchestrator | 2025-05-14 02:46:38 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:38.749786 | orchestrator | 2025-05-14 02:46:38 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:38.750181 | orchestrator | 2025-05-14 02:46:38 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:38.750200 | orchestrator | 2025-05-14 02:46:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:41.785859 | orchestrator | 2025-05-14 02:46:41 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:41.785985 | orchestrator | 2025-05-14 02:46:41 | INFO  | Task 
d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:41.786236 | orchestrator | 2025-05-14 02:46:41 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:41.787114 | orchestrator | 2025-05-14 02:46:41 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:41.787694 | orchestrator | 2025-05-14 02:46:41 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:41.787720 | orchestrator | 2025-05-14 02:46:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:44.832613 | orchestrator | 2025-05-14 02:46:44 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:44.832776 | orchestrator | 2025-05-14 02:46:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:44.833593 | orchestrator | 2025-05-14 02:46:44 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:44.834069 | orchestrator | 2025-05-14 02:46:44 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:44.841405 | orchestrator | 2025-05-14 02:46:44 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:44.841462 | orchestrator | 2025-05-14 02:46:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:47.865838 | orchestrator | 2025-05-14 02:46:47 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:47.865939 | orchestrator | 2025-05-14 02:46:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:47.866560 | orchestrator | 2025-05-14 02:46:47 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:47.866645 | orchestrator | 2025-05-14 02:46:47 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:47.866942 | orchestrator | 2025-05-14 02:46:47 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:47.867353 | orchestrator | 2025-05-14 02:46:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:50.896857 | orchestrator | 2025-05-14 02:46:50 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:50.896968 | orchestrator | 2025-05-14 02:46:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:50.897152 | orchestrator | 2025-05-14 02:46:50 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:50.897645 | orchestrator | 2025-05-14 02:46:50 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:50.898077 | orchestrator | 2025-05-14 02:46:50 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:50.898098 | orchestrator | 2025-05-14 02:46:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:53.933013 | orchestrator | 2025-05-14 02:46:53 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:53.933953 | orchestrator | 2025-05-14 02:46:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:53.934066 | orchestrator | 2025-05-14 02:46:53 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:53.934891 | orchestrator | 2025-05-14 02:46:53 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:53.935480 | orchestrator | 2025-05-14 
02:46:53 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:53.935502 | orchestrator | 2025-05-14 02:46:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:56.966390 | orchestrator | 2025-05-14 02:46:56 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:56.966516 | orchestrator | 2025-05-14 02:46:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:56.966914 | orchestrator | 2025-05-14 02:46:56 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:56.967489 | orchestrator | 2025-05-14 02:46:56 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:56.969859 | orchestrator | 2025-05-14 02:46:56 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:56.969900 | orchestrator | 2025-05-14 02:46:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:46:59.995195 | orchestrator | 2025-05-14 02:46:59 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:46:59.996673 | orchestrator | 2025-05-14 02:46:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:46:59.997199 | orchestrator | 2025-05-14 02:46:59 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:46:59.998969 | orchestrator | 2025-05-14 02:46:59 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:46:59.999387 | orchestrator | 2025-05-14 02:46:59 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:46:59.999415 | orchestrator | 2025-05-14 02:46:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:03.028698 | orchestrator | 2025-05-14 02:47:03 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:47:03.028892 | orchestrator | 2025-05-14 02:47:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:47:03.029655 | orchestrator | 2025-05-14 02:47:03 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:47:03.030135 | orchestrator | 2025-05-14 02:47:03 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:47:03.031306 | orchestrator | 2025-05-14 02:47:03 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:47:03.031402 | orchestrator | 2025-05-14 02:47:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:06.070982 | orchestrator | 2025-05-14 02:47:06 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:47:06.071057 | orchestrator | 2025-05-14 02:47:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:47:06.072544 | orchestrator | 2025-05-14 02:47:06 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:47:06.072999 | orchestrator | 2025-05-14 02:47:06 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:47:06.073546 | orchestrator | 2025-05-14 02:47:06 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:47:06.073589 | orchestrator | 2025-05-14 02:47:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:47:09.102478 | orchestrator | 2025-05-14 02:47:09 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:47:09.105931 | orchestrator | 2025-05-14 
02:47:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:47:09.106010 | orchestrator | 2025-05-14 02:47:09 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:47:09.106572 | orchestrator | 2025-05-14 02:47:09 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:47:09.107497 | orchestrator | 2025-05-14 02:47:09 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:47:09.107947 | orchestrator | 2025-05-14 02:47:09 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles repeated roughly every three seconds from 02:47:12 through 02:48:01: the same five tasks (f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c, d82f8ed9-5664-4bc4-a3e9-26e1a4e29521, d46c0cae-8878-4a3f-884a-50ba0e2452d4, d3a1ad7e-1a7d-461a-84a8-3254441f085c, 981f61e9-b628-4c1b-8f58-3a59336376a1) were reported in state STARTED, each cycle ending with "Wait 1 second(s) until the next check" ...]
2025-05-14 02:48:04.064004 | orchestrator | 2025-05-14 02:48:04 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED
2025-05-14 02:48:04.064577 | orchestrator | 2025-05-14 02:48:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:48:04.065423 | orchestrator | 2025-05-14 02:48:04 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:48:04.066134 | orchestrator | 2025-05-14 02:48:04 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:48:04.066797 | orchestrator | 2025-05-14 02:48:04 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:48:04.066818 | orchestrator | 2025-05-14 02:48:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:07.127061 | orchestrator | 2025-05-14 02:48:07 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state STARTED 2025-05-14 02:48:07.127197 | orchestrator | 2025-05-14 02:48:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:48:07.128975 | orchestrator | 2025-05-14 02:48:07 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED 2025-05-14 02:48:07.130907 | orchestrator | 2025-05-14 02:48:07 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED 2025-05-14 02:48:07.132904 | orchestrator | 2025-05-14 02:48:07 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:48:07.132951 | orchestrator | 2025-05-14 02:48:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:10.183625 | orchestrator | 2025-05-14 02:48:10.183795 | orchestrator | 2025-05-14 02:48:10 | INFO  | Task f9661ea5-cb78-4a8b-b0c1-ca2308cfe11c is in state SUCCESS 2025-05-14 02:48:10.185017 | orchestrator | 2025-05-14 02:48:10.185081 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:48:10.185241 | orchestrator | 2025-05-14 02:48:10.185289 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:48:10.185305 | orchestrator | Wednesday 14 May 2025 02:43:45 +0000 (0:00:00.278) 0:00:00.278 ********* 2025-05-14 02:48:10.185320 | orchestrator | ok: [testbed-manager] 2025-05-14 02:48:10.185335 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:48:10.185349 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:48:10.185363 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:48:10.185378 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:48:10.185394 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:48:10.185408 | orchestrator | ok: [testbed-node-5] 2025-05-14 02:48:10.185423 | orchestrator | 2025-05-14 02:48:10.185439 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:48:10.185453 | orchestrator | Wednesday 14 May 2025 02:43:46 +0000 (0:00:01.167) 0:00:01.445 ********* 2025-05-14 02:48:10.185469 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-14 02:48:10.185485 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-05-14 02:48:10.185501 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-14 02:48:10.185516 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-14 02:48:10.185533 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-14 02:48:10.185548 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-05-14 02:48:10.185564 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-14 02:48:10.185580 | orchestrator 
| 2025-05-14 02:48:10.185748 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-14 02:48:10.185767 | orchestrator | 2025-05-14 02:48:10.185784 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-14 02:48:10.185800 | orchestrator | Wednesday 14 May 2025 02:43:47 +0000 (0:00:00.793) 0:00:02.239 ********* 2025-05-14 02:48:10.185817 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:48:10.185835 | orchestrator | 2025-05-14 02:48:10.185852 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-14 02:48:10.185871 | orchestrator | Wednesday 14 May 2025 02:43:48 +0000 (0:00:01.728) 0:00:03.968 ********* 2025-05-14 02:48:10.185891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.185948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.185982 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:48:10.186106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.186133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.186192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.186229 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.186296 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.186316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.186386 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.186425 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.186441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.186457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.186486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.186716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.186745 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.186806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.186861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.186882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.186898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.186931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.187367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.187404 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:48:10.187435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.187448 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.187482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.187492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.187506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.187522 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.187532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.187542 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.187565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.187577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.187600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.187616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.187642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.187667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.187762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.187776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.187785 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.187800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.187818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.187828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.187844 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.187865 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.187875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.187884 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 
'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.187961 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.187971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.187989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.188000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.188015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.188034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.188047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.188063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.188087 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.188097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.188105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.188115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.188127 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.188160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.188169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.188178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.188187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.188223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.188233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.188251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.188324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.188374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.188391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.188406 | orchestrator | 2025-05-14 02:48:10.188419 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-14 02:48:10.188431 | orchestrator | Wednesday 14 May 2025 02:43:53 +0000 (0:00:04.607) 0:00:08.575 ********* 2025-05-14 02:48:10.188446 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:48:10.188463 | orchestrator | 2025-05-14 02:48:10.188477 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-14 02:48:10.188492 | orchestrator | Wednesday 14 May 2025 02:43:54 +0000 (0:00:01.651) 0:00:10.227 ********* 2025-05-14 02:48:10.188508 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:48:10.188525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.188542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.188568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.188606 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.188623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.188638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.188652 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.188662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188700 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188736 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188802 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:48:10.188829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188932 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.188969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.188997 | orchestrator | 2025-05-14 02:48:10.189006 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-14 02:48:10.189015 | orchestrator | Wednesday 14 May 2025 02:44:00 +0000 (0:00:05.961) 0:00:16.189 ********* 2025-05-14 02:48:10.189024 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.189033 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.189049 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.189068 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.189078 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.189087 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:48:10.189097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.189106 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.189115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.189124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.189145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.189154 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.189163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.189178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.189188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.189197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.189206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.189214 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.189224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.189241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.189255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.189296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.189307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.189316 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:10.189325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.189334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.189343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.189359 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.189368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.189377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.189393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.189402 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.189979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.189994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.190003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.190013 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.190085 | orchestrator | 2025-05-14 02:48:10.190094 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-14 02:48:10.190111 | orchestrator | Wednesday 14 May 2025 02:44:03 +0000 (0:00:02.285) 0:00:18.474 ********* 2025-05-14 02:48:10.190121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.190130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.190170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190180 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.190189 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.190205 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.190215 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.190229 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190238 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.190247 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:48:10.190286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.190302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.190317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.190396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.190412 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190421 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.190430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190448 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:10.190457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.190465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.190474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.190483 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.190492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.190505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.190519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.190528 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.190537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-14 02:48:10.190554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.190563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.190572 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.190580 | orchestrator | 2025-05-14 02:48:10.190589 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-14 02:48:10.190598 | orchestrator | Wednesday 14 May 2025 02:44:06 +0000 (0:00:03.509) 0:00:21.983 ********* 2025-05-14 02:48:10.190607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.190623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.190639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.190650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.190666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.190676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.190687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.190701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.190718 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:48:10.190739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.190750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.190760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.190794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190820 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.190845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190884 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.190901 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190916 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.190931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.190951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.190994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.191010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.191028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.191038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.191081 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.191091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.191101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.191110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191141 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.191161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.191171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.191181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.191190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.191231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.191240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.191249 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.191281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191292 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:48:10.191306 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.191322 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191337 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.191346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191355 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.191364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.191386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.191407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.191417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.191427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.191437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.191457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.191472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.191482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.191491 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.191500 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.191509 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.191540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.191565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.191574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.191606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.191817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.191856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.191871 | orchestrator | 2025-05-14 02:48:10.191885 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-14 02:48:10.191901 | orchestrator | Wednesday 14 May 2025 02:44:14 +0000 (0:00:07.660) 0:00:29.643 ********* 2025-05-14 02:48:10.191916 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:48:10.191932 | orchestrator | 2025-05-14 02:48:10.191945 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-05-14 02:48:10.191959 | orchestrator | Wednesday 14 May 2025 02:44:14 +0000 (0:00:00.591) 0:00:30.234 ********* 2025-05-14 02:48:10.191972 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1065597, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.191991 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1065597, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192000 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1065597, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192016 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1065597, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192057 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1065606, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192067 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1065597, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192076 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1065606, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192085 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1065597, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192101 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1065606, 
'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192110 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1065606, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192123 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1065606, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192156 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1065599, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192166 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1065606, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192175 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1065597, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:48:10.192184 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1065599, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192198 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1065599, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192207 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1065599, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192221 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1065603, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192253 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1065599, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192331 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1065599, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192342 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1065603, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192358 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1065603, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192369 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1065603, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192380 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1065603, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192395 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1065622, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7045255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192439 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1065603, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192456 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1065622, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7045255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192471 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1065622, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7045255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192496 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1065606, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:48:10.192513 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1065622, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7045255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192529 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1065609, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7015254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192550 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1065622, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7045255, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192592 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1065622, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7045255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192604 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1065609, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7015254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192614 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1065609, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7015254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192631 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1065609, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7015254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192641 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1065602, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192652 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 
1065609, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7015254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192671 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1065609, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7015254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192705 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1065602, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192715 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1065602, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192725 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1065602, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192745 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1065608, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192754 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1065602, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192763 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1065602, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192777 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1065608, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192807 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1065608, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192817 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1065599, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:48:10.192833 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1065608, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 
02:48:10.192842 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1065608, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192851 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1065608, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192860 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1065621, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7035253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192872 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1065621, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7035253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192902 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1065621, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7035253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192912 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1065621, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7035253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192927 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1065621, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7035253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192935 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1065621, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7035253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192944 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1065601, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192952 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1065601, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192964 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1065601, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.192992 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1065601, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 
'mtime': 1737057118.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.193001 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1065601, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.193015 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1065601, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.193023 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1065613, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7025254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.193031 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.193040 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1065613, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7025254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.193048 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.193057 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1065613, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7025254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.193065 | orchestrator | skipping: [testbed-node-1] 
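(Editorial note on the loop output above: each item is a file-stat dictionary, which is the shape the ansible.builtin.find module returns, so the rule-file copy task is looping over locally discovered *.rules files and only applies them on hosts that actually run prometheus-server — hence "changed" on testbed-manager and per-item "skipping" on the testbed-node-* hosts. A minimal sketch of that pattern follows; the variable names prometheus_rule_dir and the group name are assumptions for illustration, not the actual kolla-ansible role code.)

    # Sketch only: reproduces the find-results-as-loop-items pattern visible in the log.
    - name: Find Prometheus alerting rule files on the deploy host
      ansible.builtin.find:
        paths: "{{ prometheus_rule_dir }}"   # assumed variable, e.g. the overlays/prometheus directory
        patterns: "*.rules"
      delegate_to: localhost
      run_once: true
      register: prometheus_rule_files

    - name: Copy alerting rule files to hosts running prometheus-server
      ansible.builtin.copy:
        src: "{{ item.path }}"
        dest: "/etc/kolla/prometheus-server/{{ item.path | basename }}"
        mode: "0644"
      loop: "{{ prometheus_rule_files.files }}"          # each item is a file-stat dict like those logged above
      when: inventory_hostname in groups['prometheus']   # fails on the worker nodes, producing the per-item "skipping" lines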
2025-05-14 02:48:10.193077 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1065613, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7025254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.193085 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.193114 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1065613, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7025254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.193129 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1065613, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7025254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-14 02:48:10.193137 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.193145 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.193154 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1065603, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:48:10.193162 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1065622, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7045255, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:48:10.193170 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 12293, 'inode': 1065609, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7015254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:48:10.193178 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1065602, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6995254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:48:10.193190 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1065608, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7005253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:48:10.193223 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1065621, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.7035253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:48:10.193232 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1065601, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6985252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:48:10.193240 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1065613, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.7025254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-14 02:48:10.193249 | orchestrator | 2025-05-14 02:48:10.193257 | orchestrator | TASK [prometheus : Find prometheus common config 
overrides] ******************** 2025-05-14 02:48:10.193333 | orchestrator | Wednesday 14 May 2025 02:44:52 +0000 (0:00:37.958) 0:01:08.192 ********* 2025-05-14 02:48:10.193341 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:48:10.193350 | orchestrator | 2025-05-14 02:48:10.193357 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-05-14 02:48:10.193365 | orchestrator | Wednesday 14 May 2025 02:44:53 +0000 (0:00:00.421) 0:01:08.614 ********* 2025-05-14 02:48:10.193373 | orchestrator | [WARNING]: Skipped 2025-05-14 02:48:10.193382 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193390 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-05-14 02:48:10.193397 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193405 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-05-14 02:48:10.193413 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:48:10.193421 | orchestrator | [WARNING]: Skipped 2025-05-14 02:48:10.193428 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193436 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-05-14 02:48:10.193444 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193452 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-05-14 02:48:10.193460 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:48:10.193467 | orchestrator | [WARNING]: Skipped 2025-05-14 02:48:10.193475 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193483 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-05-14 02:48:10.193491 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193498 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-05-14 02:48:10.193506 | orchestrator | [WARNING]: Skipped 2025-05-14 02:48:10.193514 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193529 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-05-14 02:48:10.193536 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193542 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-05-14 02:48:10.193549 | orchestrator | [WARNING]: Skipped 2025-05-14 02:48:10.193556 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193562 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-05-14 02:48:10.193572 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193579 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-05-14 02:48:10.193585 | orchestrator | [WARNING]: Skipped 2025-05-14 02:48:10.193592 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193598 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-05-14 02:48:10.193605 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193611 | orchestrator | 
node-3/prometheus.yml.d' is not a directory 2025-05-14 02:48:10.193618 | orchestrator | [WARNING]: Skipped 2025-05-14 02:48:10.193624 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193631 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-05-14 02:48:10.193637 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-14 02:48:10.193668 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-05-14 02:48:10.193676 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-14 02:48:10.193683 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-14 02:48:10.193689 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-14 02:48:10.193696 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-14 02:48:10.193702 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-14 02:48:10.193709 | orchestrator | 2025-05-14 02:48:10.193716 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-05-14 02:48:10.193722 | orchestrator | Wednesday 14 May 2025 02:44:54 +0000 (0:00:01.053) 0:01:09.668 ********* 2025-05-14 02:48:10.193729 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 02:48:10.193735 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.193742 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 02:48:10.193749 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:10.193755 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 02:48:10.193761 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.193768 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 02:48:10.193775 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.193782 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 02:48:10.193788 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.193795 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-14 02:48:10.193801 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.193808 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-05-14 02:48:10.193814 | orchestrator | 2025-05-14 02:48:10.193821 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-05-14 02:48:10.193827 | orchestrator | Wednesday 14 May 2025 02:45:14 +0000 (0:00:20.562) 0:01:30.231 ********* 2025-05-14 02:48:10.193834 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 02:48:10.193841 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.193853 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 02:48:10.193859 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.193866 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 02:48:10.193872 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:10.193879 | orchestrator | 
skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 02:48:10.193885 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.193892 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 02:48:10.193899 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.193905 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-14 02:48:10.193912 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.193918 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-05-14 02:48:10.193925 | orchestrator | 2025-05-14 02:48:10.193931 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-05-14 02:48:10.193938 | orchestrator | Wednesday 14 May 2025 02:45:19 +0000 (0:00:04.533) 0:01:34.765 ********* 2025-05-14 02:48:10.193944 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 02:48:10.193951 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.193958 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 02:48:10.193965 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.193971 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 02:48:10.193978 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:10.193984 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 02:48:10.193997 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.194004 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 02:48:10.194010 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.194047 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-14 02:48:10.194054 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.194060 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-05-14 02:48:10.194067 | orchestrator | 2025-05-14 02:48:10.194074 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-05-14 02:48:10.194081 | orchestrator | Wednesday 14 May 2025 02:45:23 +0000 (0:00:03.829) 0:01:38.594 ********* 2025-05-14 02:48:10.194087 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:48:10.194094 | orchestrator | 2025-05-14 02:48:10.194104 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-05-14 02:48:10.194111 | orchestrator | Wednesday 14 May 2025 02:45:24 +0000 (0:00:00.689) 0:01:39.283 ********* 2025-05-14 02:48:10.194118 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:48:10.194124 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.194131 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:10.194137 
| orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.194144 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.194150 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.194157 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.194163 | orchestrator | 2025-05-14 02:48:10.194175 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-05-14 02:48:10.194182 | orchestrator | Wednesday 14 May 2025 02:45:24 +0000 (0:00:00.750) 0:01:40.034 ********* 2025-05-14 02:48:10.194189 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:48:10.194195 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.194202 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.194208 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.194215 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:48:10.194221 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:48:10.194228 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:10.194234 | orchestrator | 2025-05-14 02:48:10.194241 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-05-14 02:48:10.194247 | orchestrator | Wednesday 14 May 2025 02:45:28 +0000 (0:00:03.334) 0:01:43.368 ********* 2025-05-14 02:48:10.194254 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:48:10.194282 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:48:10.194294 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.194305 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:10.194316 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:48:10.194327 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.194339 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:48:10.194347 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.194354 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:48:10.194373 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.194380 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:48:10.194395 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.194402 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-14 02:48:10.194408 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:48:10.194414 | orchestrator | 2025-05-14 02:48:10.194421 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-05-14 02:48:10.194428 | orchestrator | Wednesday 14 May 2025 02:45:30 +0000 (0:00:02.407) 0:01:45.776 ********* 2025-05-14 02:48:10.194434 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 02:48:10.194441 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.194448 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 02:48:10.194454 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:10.194461 | orchestrator | skipping: [testbed-node-2] 
=> (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 02:48:10.194467 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.194474 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 02:48:10.194480 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.194487 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 02:48:10.194493 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.194500 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-14 02:48:10.194506 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.194513 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-14 02:48:10.194520 | orchestrator | 2025-05-14 02:48:10.194526 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-14 02:48:10.194540 | orchestrator | Wednesday 14 May 2025 02:45:33 +0000 (0:00:03.210) 0:01:48.987 ********* 2025-05-14 02:48:10.194551 | orchestrator | [WARNING]: Skipped 2025-05-14 02:48:10.194558 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-14 02:48:10.194564 | orchestrator | due to this access issue: 2025-05-14 02:48:10.194571 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-14 02:48:10.194577 | orchestrator | not a directory 2025-05-14 02:48:10.194584 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-14 02:48:10.194590 | orchestrator | 2025-05-14 02:48:10.194597 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-14 02:48:10.194603 | orchestrator | Wednesday 14 May 2025 02:45:35 +0000 (0:00:01.584) 0:01:50.571 ********* 2025-05-14 02:48:10.194610 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:48:10.194616 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.194623 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:10.194630 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.194636 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.194643 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.194655 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.194662 | orchestrator | 2025-05-14 02:48:10.194668 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-14 02:48:10.194675 | orchestrator | Wednesday 14 May 2025 02:45:36 +0000 (0:00:00.777) 0:01:51.349 ********* 2025-05-14 02:48:10.194682 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:48:10.194688 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.194695 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:10.194701 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.194708 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.194714 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.194721 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.194727 | orchestrator | 2025-05-14 02:48:10.194734 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-14 02:48:10.194740 | 
orchestrator | Wednesday 14 May 2025 02:45:36 +0000 (0:00:00.844) 0:01:52.193 ********* 2025-05-14 02:48:10.194747 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:48:10.194753 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.194763 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:48:10.194774 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:10.194792 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:48:10.194804 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.194814 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:48:10.194824 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.194834 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:48:10.194844 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.194855 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:48:10.194866 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.194878 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-14 02:48:10.194889 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:48:10.194901 | orchestrator | 2025-05-14 02:48:10.194913 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-14 02:48:10.194921 | orchestrator | Wednesday 14 May 2025 02:45:39 +0000 (0:00:02.251) 0:01:54.444 ********* 2025-05-14 02:48:10.194927 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:48:10.194940 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:10.194947 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:48:10.194953 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:10.194960 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:48:10.194966 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:10.194973 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:48:10.194979 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:10.194986 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:48:10.194992 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:10.194999 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:48:10.195005 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:10.195012 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-14 02:48:10.195018 | orchestrator | skipping: [testbed-manager] 2025-05-14 02:48:10.195025 | orchestrator | 2025-05-14 02:48:10.195031 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-14 
02:48:10.195038 | orchestrator | Wednesday 14 May 2025 02:45:42 +0000 (0:00:03.501) 0:01:57.945 ********* 2025-05-14 02:48:10.195049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.195064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.195072 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-14 02:48:10.195084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.195091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.195101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.195113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.195120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-14 02:48:10.195127 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 
02:48:10.195139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195146 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.195163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.195170 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.195181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.195188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195207 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.195214 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.195221 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-14 02:48:10.195230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.195294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.195302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.195313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.195326 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-14 02:48:10.195338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.195352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195359 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.195371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.195382 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195389 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.195401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.195409 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.195416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195426 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.195437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.195450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.195464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.195493 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.195504 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.195512 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.195525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.195543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.195569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.195593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.195606 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.195617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195629 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.195637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 
'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.195667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.195702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.195710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-14 02:48:10.195729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-14 02:48:10.195744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-14 02:48:10.195756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.195763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.195777 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.195790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.195817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-14 02:48:10.195832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 
'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-14 02:48:10.195845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-14 02:48:10.195852 | orchestrator | 2025-05-14 02:48:10.195859 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-14 02:48:10.195866 | orchestrator | Wednesday 14 May 2025 02:45:48 +0000 (0:00:05.930) 0:02:03.875 ********* 2025-05-14 02:48:10.195873 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-14 02:48:10.195879 | orchestrator | 2025-05-14 02:48:10.195886 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:48:10.195893 | orchestrator | Wednesday 14 May 2025 02:45:51 +0000 (0:00:02.563) 0:02:06.439 ********* 2025-05-14 02:48:10.195900 | orchestrator | 2025-05-14 02:48:10.195906 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:48:10.195913 | orchestrator | Wednesday 14 May 2025 02:45:51 +0000 (0:00:00.053) 0:02:06.492 ********* 2025-05-14 02:48:10.195919 | orchestrator | 2025-05-14 02:48:10.195926 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:48:10.195938 | orchestrator | Wednesday 14 May 2025 02:45:51 +0000 (0:00:00.193) 0:02:06.686 ********* 2025-05-14 02:48:10.195944 | orchestrator | 2025-05-14 02:48:10.195951 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:48:10.195957 | orchestrator | Wednesday 14 May 2025 02:45:51 +0000 (0:00:00.074) 0:02:06.760 ********* 2025-05-14 02:48:10.195964 | orchestrator | 2025-05-14 02:48:10.195971 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:48:10.195977 | orchestrator | Wednesday 14 May 2025 02:45:51 +0000 (0:00:00.049) 0:02:06.810 ********* 2025-05-14 02:48:10.195983 | orchestrator | 2025-05-14 02:48:10.195995 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:48:10.196002 | orchestrator | Wednesday 14 May 
2025 02:45:51 +0000 (0:00:00.049) 0:02:06.859 ********* 2025-05-14 02:48:10.196009 | orchestrator | 2025-05-14 02:48:10.196015 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-14 02:48:10.196022 | orchestrator | Wednesday 14 May 2025 02:45:51 +0000 (0:00:00.169) 0:02:07.028 ********* 2025-05-14 02:48:10.196028 | orchestrator | 2025-05-14 02:48:10.196035 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-05-14 02:48:10.196041 | orchestrator | Wednesday 14 May 2025 02:45:51 +0000 (0:00:00.063) 0:02:07.092 ********* 2025-05-14 02:48:10.196048 | orchestrator | changed: [testbed-manager] 2025-05-14 02:48:10.196054 | orchestrator | 2025-05-14 02:48:10.196061 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-05-14 02:48:10.196067 | orchestrator | Wednesday 14 May 2025 02:46:07 +0000 (0:00:15.998) 0:02:23.090 ********* 2025-05-14 02:48:10.196074 | orchestrator | changed: [testbed-manager] 2025-05-14 02:48:10.196081 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:48:10.196090 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:48:10.196097 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:48:10.196104 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:10.196110 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:48:10.196117 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:48:10.196123 | orchestrator | 2025-05-14 02:48:10.196130 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-05-14 02:48:10.196137 | orchestrator | Wednesday 14 May 2025 02:46:29 +0000 (0:00:21.794) 0:02:44.885 ********* 2025-05-14 02:48:10.196143 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:10.196150 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:48:10.196157 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:48:10.196163 | orchestrator | 2025-05-14 02:48:10.196170 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-05-14 02:48:10.196176 | orchestrator | Wednesday 14 May 2025 02:46:44 +0000 (0:00:14.949) 0:02:59.835 ********* 2025-05-14 02:48:10.196183 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:10.196189 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:48:10.196196 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:48:10.196202 | orchestrator | 2025-05-14 02:48:10.196209 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-05-14 02:48:10.196215 | orchestrator | Wednesday 14 May 2025 02:46:59 +0000 (0:00:15.121) 0:03:14.957 ********* 2025-05-14 02:48:10.196222 | orchestrator | changed: [testbed-manager] 2025-05-14 02:48:10.196229 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:48:10.196238 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:48:10.196250 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:48:10.196279 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:48:10.196290 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:48:10.196300 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:10.196310 | orchestrator | 2025-05-14 02:48:10.196319 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-05-14 02:48:10.196330 | orchestrator | Wednesday 14 May 2025 02:47:19 +0000 (0:00:19.394) 0:03:34.351 ********* 2025-05-14 
02:48:10.196341 | orchestrator | changed: [testbed-manager]
2025-05-14 02:48:10.196362 | orchestrator |
2025-05-14 02:48:10.196373 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-05-14 02:48:10.196384 | orchestrator | Wednesday 14 May 2025 02:47:34 +0000 (0:00:14.933) 0:03:49.285 *********
2025-05-14 02:48:10.196396 | orchestrator | changed: [testbed-node-0]
2025-05-14 02:48:10.196407 | orchestrator | changed: [testbed-node-1]
2025-05-14 02:48:10.196418 | orchestrator | changed: [testbed-node-2]
2025-05-14 02:48:10.196429 | orchestrator |
2025-05-14 02:48:10.196440 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-05-14 02:48:10.196450 | orchestrator | Wednesday 14 May 2025 02:47:45 +0000 (0:00:11.903) 0:04:01.189 *********
2025-05-14 02:48:10.196460 | orchestrator | changed: [testbed-manager]
2025-05-14 02:48:10.196471 | orchestrator |
2025-05-14 02:48:10.196482 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-05-14 02:48:10.196494 | orchestrator | Wednesday 14 May 2025 02:47:54 +0000 (0:00:08.929) 0:04:10.118 *********
2025-05-14 02:48:10.196501 | orchestrator | changed: [testbed-node-4]
2025-05-14 02:48:10.196507 | orchestrator | changed: [testbed-node-3]
2025-05-14 02:48:10.196514 | orchestrator | changed: [testbed-node-5]
2025-05-14 02:48:10.196520 | orchestrator |
2025-05-14 02:48:10.196527 | orchestrator | PLAY RECAP *********************************************************************
2025-05-14 02:48:10.196534 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-14 02:48:10.196541 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-14 02:48:10.196548 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-14 02:48:10.196554 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-14 02:48:10.196561 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-14 02:48:10.196568 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-14 02:48:10.196574 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-14 02:48:10.196581 | orchestrator |
2025-05-14 02:48:10.196587 | orchestrator |
2025-05-14 02:48:10.196598 | orchestrator | TASKS RECAP ********************************************************************
2025-05-14 02:48:10.196605 | orchestrator | Wednesday 14 May 2025 02:48:09 +0000 (0:00:14.535) 0:04:24.653 *********
2025-05-14 02:48:10.196611 | orchestrator | ===============================================================================
2025-05-14 02:48:10.196618 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 37.96s
2025-05-14 02:48:10.196624 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 21.80s
2025-05-14 02:48:10.196631 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 20.56s
2025-05-14 02:48:10.196638 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 19.39s
2025-05-14 02:48:10.196644 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.00s
2025-05-14 02:48:10.196651 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 15.12s
2025-05-14 02:48:10.196662 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 14.95s
2025-05-14 02:48:10.196669 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 14.93s
2025-05-14 02:48:10.196675 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 14.54s
2025-05-14 02:48:10.196682 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.90s
2025-05-14 02:48:10.196693 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 8.93s
2025-05-14 02:48:10.196700 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.66s
2025-05-14 02:48:10.196707 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.96s
2025-05-14 02:48:10.196713 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.93s
2025-05-14 02:48:10.196719 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.61s
2025-05-14 02:48:10.196726 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.53s
2025-05-14 02:48:10.196733 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.83s
2025-05-14 02:48:10.196739 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.51s
2025-05-14 02:48:10.196746 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 3.50s
2025-05-14 02:48:10.196752 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.33s
2025-05-14 02:48:10.196759 | orchestrator | 2025-05-14 02:48:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:48:10.196765 | orchestrator | 2025-05-14 02:48:10 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED
2025-05-14 02:48:10.196772 | orchestrator | 2025-05-14 02:48:10 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED
2025-05-14 02:48:10.196779 | orchestrator | 2025-05-14 02:48:10 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED
2025-05-14 02:48:10.196785 | orchestrator | 2025-05-14 02:48:10 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:48:13.222446 | orchestrator | 2025-05-14 02:48:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:48:13.222669 | orchestrator | 2025-05-14 02:48:13 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED
2025-05-14 02:48:13.224003 | orchestrator | 2025-05-14 02:48:13 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED
2025-05-14 02:48:13.224064 | orchestrator | 2025-05-14 02:48:13 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED
2025-05-14 02:48:13.227934 | orchestrator | 2025-05-14 02:48:13 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED
2025-05-14 02:48:13.233222 | orchestrator | 2025-05-14 02:48:13 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:48:16.270550 | orchestrator | 2025-05-14 02:48:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:48:16.270670 | orchestrator | 
2025-05-14 02:48:16.270670 | orchestrator | 2025-05-14 02:48:16 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED
2025-05-14 02:48:16.273752 | orchestrator | 2025-05-14 02:48:16 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED
2025-05-14 02:48:16.273810 | orchestrator | 2025-05-14 02:48:16 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED
2025-05-14 02:48:16.274298 | orchestrator | 2025-05-14 02:48:16 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED
2025-05-14 02:48:16.274452 | orchestrator | 2025-05-14 02:48:16 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:48:19.331109 | orchestrator | 2025-05-14 02:48:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:48:19.335100 | orchestrator | 2025-05-14 02:48:19 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED
2025-05-14 02:48:19.335191 | orchestrator | 2025-05-14 02:48:19 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED
2025-05-14 02:48:19.337392 | orchestrator | 2025-05-14 02:48:19 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED
2025-05-14 02:48:19.339113 | orchestrator | 2025-05-14 02:48:19 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED
2025-05-14 02:48:19.339136 | orchestrator | 2025-05-14 02:48:19 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:48:22.380871 | orchestrator | 2025-05-14 02:48:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:48:22.381570 | orchestrator | 2025-05-14 02:48:22 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED
2025-05-14 02:48:22.382754 | orchestrator | 2025-05-14 02:48:22 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED
2025-05-14 02:48:22.383687 | orchestrator | 2025-05-14 02:48:22 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED
2025-05-14 02:48:22.384827 | orchestrator | 2025-05-14 02:48:22 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED
2025-05-14 02:48:22.384874 | orchestrator | 2025-05-14 02:48:22 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:48:25.442123 | orchestrator | 2025-05-14 02:48:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:48:25.442225 | orchestrator | 2025-05-14 02:48:25 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED
2025-05-14 02:48:25.442235 | orchestrator | 2025-05-14 02:48:25 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED
2025-05-14 02:48:25.442242 | orchestrator | 2025-05-14 02:48:25 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED
2025-05-14 02:48:25.442271 | orchestrator | 2025-05-14 02:48:25 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED
2025-05-14 02:48:25.442279 | orchestrator | 2025-05-14 02:48:25 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:48:28.480123 | orchestrator | 2025-05-14 02:48:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:48:28.480244 | orchestrator | 2025-05-14 02:48:28 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED
2025-05-14 02:48:28.483208 | orchestrator | 2025-05-14 02:48:28 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED
2025-05-14 02:48:28.483638 | orchestrator | 2025-05-14 02:48:28 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14
02:48:28.484453 | orchestrator | 2025-05-14 02:48:28 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED
2025-05-14 02:48:28.484498 | orchestrator | 2025-05-14 02:48:28 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:48:31.524727 | orchestrator | 2025-05-14 02:48:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:48:31.526320 | orchestrator | 2025-05-14 02:48:31 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED
2025-05-14 02:48:31.528324 | orchestrator | 2025-05-14 02:48:31 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED
2025-05-14 02:48:31.529855 | orchestrator | 2025-05-14 02:48:31 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED
2025-05-14 02:48:31.531976 | orchestrator | 2025-05-14 02:48:31 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED
2025-05-14 02:48:31.532321 | orchestrator | 2025-05-14 02:48:31 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:48:34.576373 | orchestrator | 2025-05-14 02:48:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:48:34.578216 | orchestrator | 2025-05-14 02:48:34 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED
2025-05-14 02:48:34.580365 | orchestrator | 2025-05-14 02:48:34 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state STARTED
2025-05-14 02:48:34.582057 | orchestrator | 2025-05-14 02:48:34 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED
2025-05-14 02:48:34.583169 | orchestrator | 2025-05-14 02:48:34 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED
2025-05-14 02:48:34.583203 | orchestrator | 2025-05-14 02:48:34 | INFO  | Wait 1 second(s) until the next check
2025-05-14 02:48:37.626071 | orchestrator | 2025-05-14 02:48:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 02:48:37.626192 | orchestrator | 2025-05-14 02:48:37 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state STARTED
2025-05-14 02:48:37.627739 | orchestrator | 2025-05-14 02:48:37 | INFO  | Task d3a1ad7e-1a7d-461a-84a8-3254441f085c is in state SUCCESS
2025-05-14 02:48:37.629431 | orchestrator |
2025-05-14 02:48:37.629479 | orchestrator |
2025-05-14 02:48:37.629490 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-14 02:48:37.629501 | orchestrator |
2025-05-14 02:48:37.629511 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-14 02:48:37.629522 | orchestrator | Wednesday 14 May 2025 02:45:24 +0000 (0:00:00.271) 0:00:00.271 *********
2025-05-14 02:48:37.629532 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:48:37.629544 | orchestrator | ok: [testbed-node-1]
2025-05-14 02:48:37.629554 | orchestrator | ok: [testbed-node-2]
2025-05-14 02:48:37.629565 | orchestrator | ok: [testbed-node-3]
2025-05-14 02:48:37.629577 | orchestrator | ok: [testbed-node-4]
2025-05-14 02:48:37.629588 | orchestrator | ok: [testbed-node-5]
2025-05-14 02:48:37.629723 | orchestrator |
2025-05-14 02:48:37.630180 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-14 02:48:37.630199 | orchestrator | Wednesday 14 May 2025 02:45:25 +0000 (0:00:00.549) 0:00:00.821 *********
2025-05-14 02:48:37.630211 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-05-14 02:48:37.630222 | orchestrator | ok:
[testbed-node-1] => (item=enable_cinder_True) 2025-05-14 02:48:37.630233 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-14 02:48:37.630264 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-14 02:48:37.630276 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-14 02:48:37.630287 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-14 02:48:37.630298 | orchestrator | 2025-05-14 02:48:37.630309 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-14 02:48:37.630319 | orchestrator | 2025-05-14 02:48:37.630330 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-14 02:48:37.630341 | orchestrator | Wednesday 14 May 2025 02:45:26 +0000 (0:00:01.016) 0:00:01.837 ********* 2025-05-14 02:48:37.630351 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:48:37.630363 | orchestrator | 2025-05-14 02:48:37.630504 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-14 02:48:37.630516 | orchestrator | Wednesday 14 May 2025 02:45:27 +0000 (0:00:01.390) 0:00:03.228 ********* 2025-05-14 02:48:37.630527 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-14 02:48:37.630537 | orchestrator | 2025-05-14 02:48:37.630548 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-14 02:48:37.630558 | orchestrator | Wednesday 14 May 2025 02:45:30 +0000 (0:00:03.470) 0:00:06.699 ********* 2025-05-14 02:48:37.630569 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-14 02:48:37.630580 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-14 02:48:37.630916 | orchestrator | 2025-05-14 02:48:37.630935 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-14 02:48:37.630945 | orchestrator | Wednesday 14 May 2025 02:45:37 +0000 (0:00:06.775) 0:00:13.474 ********* 2025-05-14 02:48:37.630953 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:48:37.630962 | orchestrator | 2025-05-14 02:48:37.630971 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-14 02:48:37.630980 | orchestrator | Wednesday 14 May 2025 02:45:41 +0000 (0:00:03.958) 0:00:17.432 ********* 2025-05-14 02:48:37.630989 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:48:37.631000 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-14 02:48:37.631009 | orchestrator | 2025-05-14 02:48:37.631019 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-14 02:48:37.631029 | orchestrator | Wednesday 14 May 2025 02:45:45 +0000 (0:00:04.140) 0:00:21.573 ********* 2025-05-14 02:48:37.631038 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:48:37.631047 | orchestrator | 2025-05-14 02:48:37.631057 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-14 02:48:37.631066 | orchestrator | Wednesday 14 May 2025 02:45:49 +0000 (0:00:03.433) 0:00:25.007 
********* 2025-05-14 02:48:37.631076 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-14 02:48:37.631086 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-14 02:48:37.631095 | orchestrator | 2025-05-14 02:48:37.631104 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-14 02:48:37.631114 | orchestrator | Wednesday 14 May 2025 02:45:58 +0000 (0:00:09.018) 0:00:34.025 ********* 2025-05-14 02:48:37.631175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.631191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.631202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.631227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.631238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.631302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.631346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.631355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.631560 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.631576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.631586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.631781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.631799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.631819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.631830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.631840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.631855 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.631894 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.631915 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.631926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.631936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.631945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.631987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.632000 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.632017 | orchestrator | 2025-05-14 02:48:37.632028 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-14 02:48:37.632038 | orchestrator | Wednesday 14 May 2025 02:46:01 +0000 (0:00:02.825) 0:00:36.850 ********* 2025-05-14 02:48:37.632047 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:37.632056 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:37.632065 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:37.632074 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:48:37.632084 | orchestrator | 2025-05-14 02:48:37.632094 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-14 02:48:37.632103 | orchestrator | Wednesday 14 May 2025 02:46:02 +0000 (0:00:00.895) 0:00:37.746 ********* 2025-05-14 02:48:37.632112 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-14 02:48:37.632121 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-14 02:48:37.632131 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-14 02:48:37.632141 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-14 02:48:37.632151 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-14 02:48:37.632159 | 
orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-14 02:48:37.632168 | orchestrator | 2025-05-14 02:48:37.632177 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-14 02:48:37.632186 | orchestrator | Wednesday 14 May 2025 02:46:05 +0000 (0:00:03.885) 0:00:41.631 ********* 2025-05-14 02:48:37.632196 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 02:48:37.632208 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 02:48:37.632272 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 02:48:37.632301 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 02:48:37.632310 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 02:48:37.632319 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-14 02:48:37.632333 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 02:48:37.632378 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 02:48:37.632391 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 02:48:37.632400 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 02:48:37.632410 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 02:48:37.632449 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 
'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-14 02:48:37.632465 | orchestrator | 2025-05-14 02:48:37.632472 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-14 02:48:37.632478 | orchestrator | Wednesday 14 May 2025 02:46:10 +0000 (0:00:04.993) 0:00:46.625 ********* 2025-05-14 02:48:37.632484 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:48:37.632491 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:48:37.632498 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:48:37.632504 | orchestrator | 2025-05-14 02:48:37.632510 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-14 02:48:37.632516 | orchestrator | Wednesday 14 May 2025 02:46:14 +0000 (0:00:03.909) 0:00:50.535 ********* 2025-05-14 02:48:37.632522 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-14 02:48:37.632529 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-14 02:48:37.632535 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-14 02:48:37.632541 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 02:48:37.632547 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 02:48:37.632554 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-14 02:48:37.632562 | orchestrator | 2025-05-14 02:48:37.632571 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-14 02:48:37.632580 | orchestrator | Wednesday 14 May 2025 02:46:19 +0000 (0:00:04.219) 0:00:54.754 ********* 2025-05-14 02:48:37.632589 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-14 02:48:37.632599 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-14 02:48:37.632608 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-14 02:48:37.632617 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-14 02:48:37.632627 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-14 02:48:37.632636 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-14 02:48:37.632646 | orchestrator | 2025-05-14 02:48:37.632655 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-14 02:48:37.632665 | orchestrator | Wednesday 14 May 2025 02:46:20 +0000 (0:00:01.674) 0:00:56.428 ********* 2025-05-14 02:48:37.632674 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:37.632683 | orchestrator | 2025-05-14 02:48:37.632693 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-14 02:48:37.632702 | orchestrator | Wednesday 14 May 2025 02:46:20 +0000 (0:00:00.105) 0:00:56.534 ********* 2025-05-14 02:48:37.632712 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:37.632720 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:37.632729 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:37.632738 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:37.632746 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:37.632751 | orchestrator | 
skipping: [testbed-node-5] 2025-05-14 02:48:37.632757 | orchestrator | 2025-05-14 02:48:37.632762 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-14 02:48:37.632768 | orchestrator | Wednesday 14 May 2025 02:46:21 +0000 (0:00:00.733) 0:00:57.267 ********* 2025-05-14 02:48:37.632774 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:48:37.632781 | orchestrator | 2025-05-14 02:48:37.632786 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-14 02:48:37.632797 | orchestrator | Wednesday 14 May 2025 02:46:23 +0000 (0:00:01.572) 0:00:58.840 ********* 2025-05-14 02:48:37.632803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.632836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.632843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.632850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.632857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.632866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.632889 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.632896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.632902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.632908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.632918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.632923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.632929 | orchestrator | 2025-05-14 02:48:37.632935 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-14 
02:48:37.632940 | orchestrator | Wednesday 14 May 2025 02:46:26 +0000 (0:00:03.527) 0:01:02.367 ********* 2025-05-14 02:48:37.632963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.632970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.632976 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:37.632982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.632991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.632996 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:37.633002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.633028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633035 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:37.633041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633056 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:37.633062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633073 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:37.633097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633109 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:37.633115 | orchestrator | 2025-05-14 02:48:37.633120 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-14 02:48:37.633126 | orchestrator | Wednesday 14 May 2025 02:46:27 +0000 (0:00:01.144) 0:01:03.512 ********* 2025-05-14 02:48:37.633132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.633144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.633178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633185 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:37.633191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.633197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633206 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:37.633212 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:37.633218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633229 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:37.633280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633293 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:37.633298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633312 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:37.633317 | orchestrator | 2025-05-14 02:48:37.633322 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-14 02:48:37.633327 | orchestrator | Wednesday 14 May 2025 02:46:30 +0000 (0:00:02.819) 0:01:06.331 ********* 2025-05-14 02:48:37.633332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.633355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.633361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.633374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.633400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.633407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.633418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  
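(For reference, the per-service items this "Copying over config.json files for services" task loops over are hard to read inline. The following is a minimal, illustrative Python sketch, not part of the playbook or of kolla-ansible, that pretty-prints the cinder-scheduler definition exactly as it appears in the entries above; treating the empty-string volume entries as placeholders for disabled optional mounts is an assumption on my part, not something stated in the log.)

    # Illustrative only: pretty-print one kolla service definition copied
    # verbatim from the log entries above.
    import json

    cinder_scheduler = {
        'container_name': 'cinder_scheduler',
        'group': 'cinder-scheduler',
        'enabled': True,
        'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206',
        'volumes': [
            '/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro',
            '/etc/localtime:/etc/localtime:ro',
            '/etc/timezone:/etc/timezone:ro',
            'kolla_logs:/var/log/kolla/',
            '',  # assumed: placeholder emitted for a disabled optional mount
        ],
        'dimensions': {},
        'healthcheck': {
            'interval': '30',
            'retries': '3',
            'start_period': '5',
            'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'],
            'timeout': '30',
        },
    }

    # Drop the empty placeholder entries before displaying the bind mounts.
    mounts = [v for v in cinder_scheduler['volumes'] if v]
    print(json.dumps({'image': cinder_scheduler['image'],
                      'volumes': mounts,
                      'healthcheck': cinder_scheduler['healthcheck']['test']},
                     indent=2))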
2025-05-14 02:48:37.633427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633470 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633577 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633594 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633622 | orchestrator | 2025-05-14 02:48:37.633630 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-14 02:48:37.633637 | orchestrator | Wednesday 14 May 2025 02:46:34 +0000 (0:00:04.304) 0:01:10.635 ********* 2025-05-14 02:48:37.633644 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-14 02:48:37.633651 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:37.633659 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-14 02:48:37.633666 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:37.633673 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-14 02:48:37.633680 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:37.633688 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-14 02:48:37.633695 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-14 02:48:37.633703 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-14 02:48:37.633710 | orchestrator | 2025-05-14 02:48:37.633718 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-14 02:48:37.633727 | orchestrator | Wednesday 14 May 2025 02:46:38 +0000 (0:00:03.301) 0:01:13.937 ********* 2025-05-14 02:48:37.633736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.633745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.633783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.633801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.633821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.633841 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.633858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633867 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633928 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.633987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.633995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634004 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.634052 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.634065 | orchestrator | 2025-05-14 02:48:37.634078 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-14 02:48:37.634087 | orchestrator | Wednesday 14 May 2025 02:46:50 +0000 (0:00:11.977) 0:01:25.915 ********* 2025-05-14 02:48:37.634093 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:37.634097 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:37.634102 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:37.634107 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:48:37.634112 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:48:37.634120 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:48:37.634128 | orchestrator | 2025-05-14 02:48:37.634136 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-14 02:48:37.634144 | orchestrator | Wednesday 14 May 2025 02:46:53 +0000 (0:00:03.367) 0:01:29.282 ********* 2025-05-14 02:48:37.634152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.634161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.634212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.634270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634284 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:37.634293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634310 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:37.634318 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:37.634327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.634343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634379 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:37.634387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.634395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634428 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:37.634446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.634455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634486 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:37.634494 | orchestrator | 2025-05-14 02:48:37.634502 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-14 02:48:37.634510 | orchestrator | Wednesday 14 May 2025 02:46:54 +0000 (0:00:01.273) 0:01:30.556 ********* 2025-05-14 02:48:37.634517 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:37.634522 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:37.634526 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:37.634531 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:48:37.634536 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:37.634541 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:37.634545 | orchestrator | 2025-05-14 02:48:37.634550 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-14 02:48:37.634555 | orchestrator | Wednesday 14 May 2025 02:46:55 +0000 (0:00:00.738) 0:01:31.294 ********* 2025-05-14 02:48:37.634566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.634571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.634586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-14 02:48:37.634596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.634612 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.634621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-14 02:48:37.634626 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.634636 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.634642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.634647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.634656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.634678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.634699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.634704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-14 02:48:37.634720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.634725 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-14 02:48:37.634735 | orchestrator | 2025-05-14 02:48:37.634740 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-14 02:48:37.634745 | orchestrator | Wednesday 14 May 2025 02:46:58 +0000 (0:00:03.055) 0:01:34.350 ********* 2025-05-14 02:48:37.634750 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:37.634754 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:37.634759 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:37.634764 | orchestrator | 
skipping: [testbed-node-3] 2025-05-14 02:48:37.634768 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:48:37.634773 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:48:37.634778 | orchestrator | 2025-05-14 02:48:37.634783 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-14 02:48:37.634787 | orchestrator | Wednesday 14 May 2025 02:46:59 +0000 (0:00:00.661) 0:01:35.011 ********* 2025-05-14 02:48:37.634792 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:37.634797 | orchestrator | 2025-05-14 02:48:37.634801 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-14 02:48:37.634806 | orchestrator | Wednesday 14 May 2025 02:47:02 +0000 (0:00:03.009) 0:01:38.021 ********* 2025-05-14 02:48:37.634811 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:37.634817 | orchestrator | 2025-05-14 02:48:37.634825 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-14 02:48:37.634832 | orchestrator | Wednesday 14 May 2025 02:47:05 +0000 (0:00:03.032) 0:01:41.053 ********* 2025-05-14 02:48:37.634837 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:37.634842 | orchestrator | 2025-05-14 02:48:37.634846 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 02:48:37.634851 | orchestrator | Wednesday 14 May 2025 02:47:26 +0000 (0:00:21.254) 0:02:02.307 ********* 2025-05-14 02:48:37.634856 | orchestrator | 2025-05-14 02:48:37.634860 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 02:48:37.634865 | orchestrator | Wednesday 14 May 2025 02:47:26 +0000 (0:00:00.123) 0:02:02.431 ********* 2025-05-14 02:48:37.634870 | orchestrator | 2025-05-14 02:48:37.634874 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 02:48:37.634879 | orchestrator | Wednesday 14 May 2025 02:47:27 +0000 (0:00:00.364) 0:02:02.796 ********* 2025-05-14 02:48:37.634884 | orchestrator | 2025-05-14 02:48:37.634888 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 02:48:37.634893 | orchestrator | Wednesday 14 May 2025 02:47:27 +0000 (0:00:00.082) 0:02:02.878 ********* 2025-05-14 02:48:37.634898 | orchestrator | 2025-05-14 02:48:37.634902 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 02:48:37.634907 | orchestrator | Wednesday 14 May 2025 02:47:27 +0000 (0:00:00.095) 0:02:02.973 ********* 2025-05-14 02:48:37.634912 | orchestrator | 2025-05-14 02:48:37.634917 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-14 02:48:37.634921 | orchestrator | Wednesday 14 May 2025 02:47:27 +0000 (0:00:00.071) 0:02:03.045 ********* 2025-05-14 02:48:37.634926 | orchestrator | 2025-05-14 02:48:37.634931 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-14 02:48:37.634935 | orchestrator | Wednesday 14 May 2025 02:47:27 +0000 (0:00:00.351) 0:02:03.396 ********* 2025-05-14 02:48:37.634940 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:37.634945 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:48:37.634950 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:48:37.634958 | orchestrator | 2025-05-14 02:48:37.634962 | orchestrator | RUNNING HANDLER 
[cinder : Restart cinder-scheduler container] ****************** 2025-05-14 02:48:37.634967 | orchestrator | Wednesday 14 May 2025 02:47:45 +0000 (0:00:17.648) 0:02:21.044 ********* 2025-05-14 02:48:37.634972 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:37.634977 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:48:37.634986 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:48:37.634991 | orchestrator | 2025-05-14 02:48:37.634995 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-14 02:48:37.635003 | orchestrator | Wednesday 14 May 2025 02:47:56 +0000 (0:00:11.180) 0:02:32.225 ********* 2025-05-14 02:48:37.635008 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:48:37.635013 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:48:37.635018 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:48:37.635022 | orchestrator | 2025-05-14 02:48:37.635027 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-14 02:48:37.635032 | orchestrator | Wednesday 14 May 2025 02:48:22 +0000 (0:00:26.111) 0:02:58.336 ********* 2025-05-14 02:48:37.635037 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:48:37.635045 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:48:37.635052 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:48:37.635060 | orchestrator | 2025-05-14 02:48:37.635068 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-14 02:48:37.635077 | orchestrator | Wednesday 14 May 2025 02:48:36 +0000 (0:00:13.574) 0:03:11.910 ********* 2025-05-14 02:48:37.635082 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:37.635087 | orchestrator | 2025-05-14 02:48:37.635091 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:48:37.635096 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-14 02:48:37.635102 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-14 02:48:37.635106 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-14 02:48:37.635111 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:48:37.635116 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:48:37.635121 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:48:37.635125 | orchestrator | 2025-05-14 02:48:37.635130 | orchestrator | 2025-05-14 02:48:37.635135 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:48:37.635140 | orchestrator | Wednesday 14 May 2025 02:48:36 +0000 (0:00:00.591) 0:03:12.502 ********* 2025-05-14 02:48:37.635145 | orchestrator | =============================================================================== 2025-05-14 02:48:37.635149 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.11s 2025-05-14 02:48:37.635154 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.25s 2025-05-14 02:48:37.635159 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 17.65s 2025-05-14 
02:48:37.635164 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 13.57s 2025-05-14 02:48:37.635169 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.98s 2025-05-14 02:48:37.635173 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.18s 2025-05-14 02:48:37.635178 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.02s 2025-05-14 02:48:37.635187 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.78s 2025-05-14 02:48:37.635192 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.99s 2025-05-14 02:48:37.635199 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.30s 2025-05-14 02:48:37.635207 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.22s 2025-05-14 02:48:37.635214 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.14s 2025-05-14 02:48:37.635222 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.96s 2025-05-14 02:48:37.635229 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.91s 2025-05-14 02:48:37.635237 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.89s 2025-05-14 02:48:37.635271 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.53s 2025-05-14 02:48:37.635279 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.47s 2025-05-14 02:48:37.635287 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.43s 2025-05-14 02:48:37.635295 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.37s 2025-05-14 02:48:37.635303 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.30s 2025-05-14 02:48:37.635310 | orchestrator | 2025-05-14 02:48:37 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:48:37.635319 | orchestrator | 2025-05-14 02:48:37 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:48:37.635326 | orchestrator | 2025-05-14 02:48:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:40.680984 | orchestrator | 2025-05-14 02:48:40 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:48:40.682930 | orchestrator | 2025-05-14 02:48:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:48:40.684357 | orchestrator | 2025-05-14 02:48:40 | INFO  | Task d46c0cae-8878-4a3f-884a-50ba0e2452d4 is in state SUCCESS 2025-05-14 02:48:40.686558 | orchestrator | 2025-05-14 02:48:40.686625 | orchestrator | 2025-05-14 02:48:40.686637 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:48:40.686647 | orchestrator | 2025-05-14 02:48:40.686655 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:48:40.686664 | orchestrator | Wednesday 14 May 2025 02:45:11 +0000 (0:00:00.535) 0:00:00.535 ********* 2025-05-14 02:48:40.686672 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:48:40.686680 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:48:40.686688 | 
orchestrator | ok: [testbed-node-2] 2025-05-14 02:48:40.686696 | orchestrator | 2025-05-14 02:48:40.686704 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:48:40.686712 | orchestrator | Wednesday 14 May 2025 02:45:11 +0000 (0:00:00.487) 0:00:01.022 ********* 2025-05-14 02:48:40.686720 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-05-14 02:48:40.686728 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-05-14 02:48:40.686736 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-05-14 02:48:40.686744 | orchestrator | 2025-05-14 02:48:40.686752 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-05-14 02:48:40.686761 | orchestrator | 2025-05-14 02:48:40.686775 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-14 02:48:40.686788 | orchestrator | Wednesday 14 May 2025 02:45:11 +0000 (0:00:00.280) 0:00:01.303 ********* 2025-05-14 02:48:40.686801 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:48:40.686814 | orchestrator | 2025-05-14 02:48:40.686828 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-05-14 02:48:40.687110 | orchestrator | Wednesday 14 May 2025 02:45:12 +0000 (0:00:00.652) 0:00:01.956 ********* 2025-05-14 02:48:40.687122 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-05-14 02:48:40.687130 | orchestrator | 2025-05-14 02:48:40.687138 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-05-14 02:48:40.687146 | orchestrator | Wednesday 14 May 2025 02:45:16 +0000 (0:00:03.966) 0:00:05.923 ********* 2025-05-14 02:48:40.687154 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-14 02:48:40.687162 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-05-14 02:48:40.687170 | orchestrator | 2025-05-14 02:48:40.687177 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-05-14 02:48:40.687186 | orchestrator | Wednesday 14 May 2025 02:45:23 +0000 (0:00:07.391) 0:00:13.314 ********* 2025-05-14 02:48:40.687193 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:48:40.687202 | orchestrator | 2025-05-14 02:48:40.687210 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-05-14 02:48:40.687217 | orchestrator | Wednesday 14 May 2025 02:45:27 +0000 (0:00:03.393) 0:00:16.708 ********* 2025-05-14 02:48:40.687225 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:48:40.687233 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-14 02:48:40.687266 | orchestrator | 2025-05-14 02:48:40.687275 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-14 02:48:40.687283 | orchestrator | Wednesday 14 May 2025 02:45:31 +0000 (0:00:04.100) 0:00:20.809 ********* 2025-05-14 02:48:40.687290 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:48:40.687298 | orchestrator | 2025-05-14 02:48:40.687306 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-14 
02:48:40.687314 | orchestrator | Wednesday 14 May 2025 02:45:34 +0000 (0:00:03.512) 0:00:24.321 ********* 2025-05-14 02:48:40.687322 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-14 02:48:40.687330 | orchestrator | 2025-05-14 02:48:40.687337 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-14 02:48:40.687345 | orchestrator | Wednesday 14 May 2025 02:45:39 +0000 (0:00:04.213) 0:00:28.535 ********* 2025-05-14 02:48:40.687385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:48:40.687409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:48:40.687423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:48:40.687443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:48:40.687459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:48:40.687479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:48:40.687494 | orchestrator | 2025-05-14 02:48:40.687502 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-14 02:48:40.687510 | orchestrator | Wednesday 14 May 2025 02:45:44 +0000 (0:00:05.856) 0:00:34.392 ********* 2025-05-14 02:48:40.687518 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:48:40.687526 | orchestrator | 2025-05-14 02:48:40.687534 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-14 02:48:40.687542 | orchestrator | Wednesday 14 May 2025 02:45:45 +0000 (0:00:00.653) 0:00:35.046 ********* 2025-05-14 02:48:40.687549 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:48:40.687557 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:40.687565 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:48:40.687573 | orchestrator | 2025-05-14 02:48:40.687580 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-14 02:48:40.687588 | orchestrator | Wednesday 14 May 2025 02:45:51 +0000 (0:00:06.301) 0:00:41.348 ********* 2025-05-14 02:48:40.687596 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:48:40.687604 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:48:40.687612 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:48:40.687620 | orchestrator | 2025-05-14 02:48:40.687627 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-05-14 02:48:40.687635 | orchestrator | Wednesday 14 May 2025 02:45:53 +0000 (0:00:01.709) 0:00:43.057 ********* 2025-05-14 02:48:40.687643 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:48:40.687651 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:48:40.687658 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-14 02:48:40.687666 | orchestrator | 2025-05-14 02:48:40.687674 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-14 02:48:40.687681 | orchestrator | 
Wednesday 14 May 2025 02:45:54 +0000 (0:00:01.198) 0:00:44.256 ********* 2025-05-14 02:48:40.687689 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:48:40.687697 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:48:40.687705 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:48:40.687712 | orchestrator | 2025-05-14 02:48:40.687720 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-05-14 02:48:40.687728 | orchestrator | Wednesday 14 May 2025 02:45:55 +0000 (0:00:00.733) 0:00:44.990 ********* 2025-05-14 02:48:40.687736 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:40.687749 | orchestrator | 2025-05-14 02:48:40.687756 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-14 02:48:40.687764 | orchestrator | Wednesday 14 May 2025 02:45:55 +0000 (0:00:00.087) 0:00:45.077 ********* 2025-05-14 02:48:40.687772 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:40.687779 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:40.687787 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:40.687795 | orchestrator | 2025-05-14 02:48:40.687803 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-14 02:48:40.687811 | orchestrator | Wednesday 14 May 2025 02:45:55 +0000 (0:00:00.325) 0:00:45.403 ********* 2025-05-14 02:48:40.687822 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:48:40.687830 | orchestrator | 2025-05-14 02:48:40.687838 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-14 02:48:40.687846 | orchestrator | Wednesday 14 May 2025 02:45:56 +0000 (0:00:00.596) 0:00:45.999 ********* 2025-05-14 02:48:40.687860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:48:40.687870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:48:40.687895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
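For readability, the glance-api item that the config-directory and certificate tasks above loop over can be transcribed as YAML. The sketch below is abridged from the testbed-node-0 entry in this log (same values, trimmed to the fields relevant to the deployment; it is not the complete kolla-ansible service definition):

  glance-api:
    container_name: glance_api
    group: glance-api
    enabled: true
    image: registry.osism.tech/kolla/release/glance-api:28.1.1.20241206
    environment:
      no_proxy: "localhost,127.0.0.1,192.168.16.10,192.168.16.9"
    privileged: true
    volumes:
      - "/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro"   # rendered configs mounted read-only
      - "glance:/var/lib/glance/"
      - "kolla_logs:/var/log/kolla/"
      - "iscsi_info:/etc/iscsi"
      - "/dev:/dev"
    healthcheck:
      interval: 30
      retries: 3
      start_period: 5
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"]
      timeout: 30
    haproxy:
      glance_api:                          # internal listener (external: false), port 9292
        custom_member_list:
          - "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5"
          - "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5"
          - "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5"
      glance_api_external:                 # public listener behind api.testbed.osism.xyz
        external: true
        external_fqdn: api.testbed.osism.xyz

The haproxy sub-keys are what feed the HAProxy frontends and backends for the internal and external Glance endpoints; each backend balances across the three nodes listed in custom_member_list, with the 6h client/server timeouts shown in the log applied for long-running image transfers.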
2025-05-14 02:48:40.687905 | orchestrator | 2025-05-14 02:48:40.687913 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-14 02:48:40.687921 | orchestrator | Wednesday 14 May 2025 02:46:01 +0000 (0:00:05.461) 0:00:51.461 ********* 2025-05-14 02:48:40.687929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:48:40.687948 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:40.687966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:48:40.687976 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:40.687984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:48:40.687993 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:40.688001 | orchestrator | 2025-05-14 02:48:40.688009 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-14 02:48:40.688017 | orchestrator | Wednesday 14 May 2025 02:46:07 +0000 (0:00:05.144) 0:00:56.605 ********* 2025-05-14 02:48:40.688040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:48:40.688050 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:40.688059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:48:40.688067 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:40.688076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-14 02:48:40.688089 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:40.688097 | orchestrator | 2025-05-14 02:48:40.688108 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-14 02:48:40.688116 | orchestrator | Wednesday 14 May 2025 02:46:15 +0000 (0:00:08.522) 0:01:05.128 ********* 2025-05-14 02:48:40.688124 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:40.688131 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:40.688139 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:40.688147 | orchestrator | 2025-05-14 02:48:40.688158 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-14 02:48:40.688167 | orchestrator | Wednesday 14 May 2025 02:46:22 +0000 (0:00:06.486) 0:01:11.615 ********* 2025-05-14 02:48:40.688287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:48:40.688301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': 
{'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:48:40.688331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}}) 2025-05-14 02:48:40.688341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:48:40.688365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:48:40.688376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:48:40.688389 | orchestrator | 2025-05-14 02:48:40.688397 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-14 02:48:40.688406 | orchestrator | Wednesday 14 May 2025 02:46:27 +0000 (0:00:05.155) 0:01:16.770 ********* 2025-05-14 02:48:40.688413 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:40.688421 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:48:40.688430 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:48:40.688437 | orchestrator | 2025-05-14 02:48:40.688445 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-05-14 02:48:40.688453 | orchestrator | Wednesday 14 May 2025 02:46:40 +0000 (0:00:13.493) 0:01:30.263 ********* 2025-05-14 02:48:40.688462 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:40.688470 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:40.688478 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:40.688485 | orchestrator | 2025-05-14 02:48:40.688493 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-05-14 02:48:40.688501 | orchestrator | Wednesday 14 May 2025 02:46:54 +0000 (0:00:13.279) 0:01:43.543 ********* 2025-05-14 02:48:40.688509 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:40.688517 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:40.688525 | orchestrator | 
skipping: [testbed-node-2] 2025-05-14 02:48:40.688532 | orchestrator | 2025-05-14 02:48:40.688540 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-05-14 02:48:40.688548 | orchestrator | Wednesday 14 May 2025 02:47:01 +0000 (0:00:06.942) 0:01:50.485 ********* 2025-05-14 02:48:40.688556 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:40.688568 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:40.688576 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:40.688584 | orchestrator | 2025-05-14 02:48:40.688592 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-05-14 02:48:40.688600 | orchestrator | Wednesday 14 May 2025 02:47:12 +0000 (0:00:11.201) 0:02:01.687 ********* 2025-05-14 02:48:40.688608 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:40.688620 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:40.688628 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:40.688636 | orchestrator | 2025-05-14 02:48:40.688645 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-05-14 02:48:40.688653 | orchestrator | Wednesday 14 May 2025 02:47:19 +0000 (0:00:07.288) 0:02:08.975 ********* 2025-05-14 02:48:40.688661 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:40.688669 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:40.688676 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:40.688684 | orchestrator | 2025-05-14 02:48:40.688692 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-05-14 02:48:40.688700 | orchestrator | Wednesday 14 May 2025 02:47:19 +0000 (0:00:00.281) 0:02:09.256 ********* 2025-05-14 02:48:40.688708 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-14 02:48:40.688721 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:40.688734 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-14 02:48:40.688754 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:48:40.688767 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-14 02:48:40.688780 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:40.688793 | orchestrator | 2025-05-14 02:48:40.688807 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-14 02:48:40.688820 | orchestrator | Wednesday 14 May 2025 02:47:23 +0000 (0:00:03.999) 0:02:13.256 ********* 2025-05-14 02:48:40.688835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:48:40.688866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:48:40.688894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:48:40.688925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:48:40.688937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-14 02:48:40.688954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-14 02:48:40.688964 | orchestrator | 2025-05-14 02:48:40.688978 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-14 02:48:40.688987 | orchestrator | Wednesday 14 May 2025 02:47:28 +0000 (0:00:04.801) 0:02:18.057 ********* 2025-05-14 02:48:40.688996 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:48:40.689005 | orchestrator | skipping: 
[testbed-node-1] 2025-05-14 02:48:40.689014 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:48:40.689024 | orchestrator | 2025-05-14 02:48:40.689037 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-14 02:48:40.689046 | orchestrator | Wednesday 14 May 2025 02:47:29 +0000 (0:00:00.629) 0:02:18.687 ********* 2025-05-14 02:48:40.689061 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:40.689070 | orchestrator | 2025-05-14 02:48:40.689080 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-14 02:48:40.689089 | orchestrator | Wednesday 14 May 2025 02:47:31 +0000 (0:00:02.761) 0:02:21.448 ********* 2025-05-14 02:48:40.689098 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:40.689107 | orchestrator | 2025-05-14 02:48:40.689118 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-05-14 02:48:40.689133 | orchestrator | Wednesday 14 May 2025 02:47:34 +0000 (0:00:02.590) 0:02:24.039 ********* 2025-05-14 02:48:40.689146 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:40.689158 | orchestrator | 2025-05-14 02:48:40.689171 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-05-14 02:48:40.689184 | orchestrator | Wednesday 14 May 2025 02:47:36 +0000 (0:00:02.287) 0:02:26.326 ********* 2025-05-14 02:48:40.689197 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:40.689210 | orchestrator | 2025-05-14 02:48:40.689223 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-14 02:48:40.689286 | orchestrator | Wednesday 14 May 2025 02:48:02 +0000 (0:00:25.921) 0:02:52.248 ********* 2025-05-14 02:48:40.689303 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:40.689317 | orchestrator | 2025-05-14 02:48:40.689332 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-14 02:48:40.689347 | orchestrator | Wednesday 14 May 2025 02:48:05 +0000 (0:00:02.295) 0:02:54.543 ********* 2025-05-14 02:48:40.689362 | orchestrator | 2025-05-14 02:48:40.689386 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-14 02:48:40.689401 | orchestrator | Wednesday 14 May 2025 02:48:05 +0000 (0:00:00.097) 0:02:54.640 ********* 2025-05-14 02:48:40.689422 | orchestrator | 2025-05-14 02:48:40.689438 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-14 02:48:40.689459 | orchestrator | Wednesday 14 May 2025 02:48:05 +0000 (0:00:00.107) 0:02:54.747 ********* 2025-05-14 02:48:40.689476 | orchestrator | 2025-05-14 02:48:40.689495 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-14 02:48:40.689512 | orchestrator | Wednesday 14 May 2025 02:48:05 +0000 (0:00:00.303) 0:02:55.051 ********* 2025-05-14 02:48:40.689528 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:48:40.689691 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:48:40.689708 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:48:40.689718 | orchestrator | 2025-05-14 02:48:40.689728 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:48:40.689739 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-14 02:48:40.689751 | 
orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-14 02:48:40.689761 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-14 02:48:40.689770 | orchestrator | 2025-05-14 02:48:40.689780 | orchestrator | 2025-05-14 02:48:40.689791 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:48:40.689801 | orchestrator | Wednesday 14 May 2025 02:48:38 +0000 (0:00:33.076) 0:03:28.127 ********* 2025-05-14 02:48:40.689811 | orchestrator | =============================================================================== 2025-05-14 02:48:40.689822 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.08s 2025-05-14 02:48:40.689831 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.92s 2025-05-14 02:48:40.689841 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 13.49s 2025-05-14 02:48:40.689851 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 13.28s 2025-05-14 02:48:40.689875 | orchestrator | glance : Copying over glance-image-import.conf ------------------------- 11.20s 2025-05-14 02:48:40.689885 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 8.52s 2025-05-14 02:48:40.689895 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.39s 2025-05-14 02:48:40.689906 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 7.29s 2025-05-14 02:48:40.689916 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.94s 2025-05-14 02:48:40.689926 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 6.49s 2025-05-14 02:48:40.689936 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 6.30s 2025-05-14 02:48:40.689946 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.86s 2025-05-14 02:48:40.689956 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.46s 2025-05-14 02:48:40.689965 | orchestrator | glance : Copying over config.json files for services -------------------- 5.16s 2025-05-14 02:48:40.689975 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.14s 2025-05-14 02:48:40.689985 | orchestrator | glance : Check glance containers ---------------------------------------- 4.80s 2025-05-14 02:48:40.690002 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.21s 2025-05-14 02:48:40.690077 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.10s 2025-05-14 02:48:40.690091 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.00s 2025-05-14 02:48:40.690111 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.97s 2025-05-14 02:48:40.690122 | orchestrator | 2025-05-14 02:48:40 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:48:40.690133 | orchestrator | 2025-05-14 02:48:40 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:48:40.690143 | orchestrator | 2025-05-14 02:48:40 | INFO  | Task 
7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:48:40.690154 | orchestrator | 2025-05-14 02:48:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:43.751284 | orchestrator | 2025-05-14 02:48:43 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:48:43.751363 | orchestrator | 2025-05-14 02:48:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:48:43.751532 | orchestrator | 2025-05-14 02:48:43 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:48:43.752426 | orchestrator | 2025-05-14 02:48:43 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:48:43.753292 | orchestrator | 2025-05-14 02:48:43 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:48:43.753335 | orchestrator | 2025-05-14 02:48:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:46.812717 | orchestrator | 2025-05-14 02:48:46 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:48:46.814288 | orchestrator | 2025-05-14 02:48:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:48:46.818582 | orchestrator | 2025-05-14 02:48:46 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:48:46.820044 | orchestrator | 2025-05-14 02:48:46 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:48:46.822186 | orchestrator | 2025-05-14 02:48:46 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:48:46.822315 | orchestrator | 2025-05-14 02:48:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:49.866209 | orchestrator | 2025-05-14 02:48:49 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:48:49.869053 | orchestrator | 2025-05-14 02:48:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:48:49.869107 | orchestrator | 2025-05-14 02:48:49 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:48:49.870182 | orchestrator | 2025-05-14 02:48:49 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:48:49.871563 | orchestrator | 2025-05-14 02:48:49 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:48:49.871602 | orchestrator | 2025-05-14 02:48:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:52.927735 | orchestrator | 2025-05-14 02:48:52 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:48:52.930484 | orchestrator | 2025-05-14 02:48:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:48:52.932994 | orchestrator | 2025-05-14 02:48:52 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:48:52.934711 | orchestrator | 2025-05-14 02:48:52 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:48:52.936489 | orchestrator | 2025-05-14 02:48:52 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:48:52.936541 | orchestrator | 2025-05-14 02:48:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:55.969327 | orchestrator | 2025-05-14 02:48:55 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:48:55.970503 | orchestrator | 2025-05-14 02:48:55 | INFO  | Task 
d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:48:55.972625 | orchestrator | 2025-05-14 02:48:55 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:48:55.974133 | orchestrator | 2025-05-14 02:48:55 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:48:55.975545 | orchestrator | 2025-05-14 02:48:55 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:48:55.975582 | orchestrator | 2025-05-14 02:48:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:48:59.023595 | orchestrator | 2025-05-14 02:48:59 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:48:59.024548 | orchestrator | 2025-05-14 02:48:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:48:59.025983 | orchestrator | 2025-05-14 02:48:59 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:48:59.027423 | orchestrator | 2025-05-14 02:48:59 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:48:59.028525 | orchestrator | 2025-05-14 02:48:59 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:48:59.028571 | orchestrator | 2025-05-14 02:48:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:02.075724 | orchestrator | 2025-05-14 02:49:02 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:49:02.078386 | orchestrator | 2025-05-14 02:49:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:02.081456 | orchestrator | 2025-05-14 02:49:02 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:02.083378 | orchestrator | 2025-05-14 02:49:02 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:02.085792 | orchestrator | 2025-05-14 02:49:02 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:02.085837 | orchestrator | 2025-05-14 02:49:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:05.115069 | orchestrator | 2025-05-14 02:49:05 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:49:05.116492 | orchestrator | 2025-05-14 02:49:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:05.117908 | orchestrator | 2025-05-14 02:49:05 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:05.119386 | orchestrator | 2025-05-14 02:49:05 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:05.120583 | orchestrator | 2025-05-14 02:49:05 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:05.120613 | orchestrator | 2025-05-14 02:49:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:08.159599 | orchestrator | 2025-05-14 02:49:08 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:49:08.160014 | orchestrator | 2025-05-14 02:49:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:08.160866 | orchestrator | 2025-05-14 02:49:08 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:08.162097 | orchestrator | 2025-05-14 02:49:08 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:08.162727 | orchestrator | 2025-05-14 
02:49:08 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:08.162777 | orchestrator | 2025-05-14 02:49:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:11.216029 | orchestrator | 2025-05-14 02:49:11 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:49:11.216792 | orchestrator | 2025-05-14 02:49:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:11.220284 | orchestrator | 2025-05-14 02:49:11 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:11.221836 | orchestrator | 2025-05-14 02:49:11 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:11.223689 | orchestrator | 2025-05-14 02:49:11 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:11.223736 | orchestrator | 2025-05-14 02:49:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:14.276042 | orchestrator | 2025-05-14 02:49:14 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:49:14.278440 | orchestrator | 2025-05-14 02:49:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:14.280970 | orchestrator | 2025-05-14 02:49:14 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:14.282741 | orchestrator | 2025-05-14 02:49:14 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:14.282987 | orchestrator | 2025-05-14 02:49:14 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:14.283063 | orchestrator | 2025-05-14 02:49:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:17.344748 | orchestrator | 2025-05-14 02:49:17 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:49:17.345724 | orchestrator | 2025-05-14 02:49:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:17.347445 | orchestrator | 2025-05-14 02:49:17 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:17.349580 | orchestrator | 2025-05-14 02:49:17 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:17.351528 | orchestrator | 2025-05-14 02:49:17 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:17.351599 | orchestrator | 2025-05-14 02:49:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:20.390317 | orchestrator | 2025-05-14 02:49:20 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:49:20.390416 | orchestrator | 2025-05-14 02:49:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:20.390854 | orchestrator | 2025-05-14 02:49:20 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:20.391774 | orchestrator | 2025-05-14 02:49:20 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:20.392721 | orchestrator | 2025-05-14 02:49:20 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:20.392738 | orchestrator | 2025-05-14 02:49:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:23.443187 | orchestrator | 2025-05-14 02:49:23 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:49:23.446276 | orchestrator | 2025-05-14 
02:49:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:23.448565 | orchestrator | 2025-05-14 02:49:23 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:23.450511 | orchestrator | 2025-05-14 02:49:23 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:23.451844 | orchestrator | 2025-05-14 02:49:23 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:23.451875 | orchestrator | 2025-05-14 02:49:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:26.504072 | orchestrator | 2025-05-14 02:49:26 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:49:26.505433 | orchestrator | 2025-05-14 02:49:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:26.507190 | orchestrator | 2025-05-14 02:49:26 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:26.508693 | orchestrator | 2025-05-14 02:49:26 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:26.509891 | orchestrator | 2025-05-14 02:49:26 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:26.509924 | orchestrator | 2025-05-14 02:49:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:29.546676 | orchestrator | 2025-05-14 02:49:29 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:49:29.547727 | orchestrator | 2025-05-14 02:49:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:29.549646 | orchestrator | 2025-05-14 02:49:29 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:29.551072 | orchestrator | 2025-05-14 02:49:29 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:29.552529 | orchestrator | 2025-05-14 02:49:29 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:29.552681 | orchestrator | 2025-05-14 02:49:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:32.597684 | orchestrator | 2025-05-14 02:49:32 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:49:32.597818 | orchestrator | 2025-05-14 02:49:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:32.598380 | orchestrator | 2025-05-14 02:49:32 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:32.598774 | orchestrator | 2025-05-14 02:49:32 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:32.599737 | orchestrator | 2025-05-14 02:49:32 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:32.599772 | orchestrator | 2025-05-14 02:49:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:35.665333 | orchestrator | 2025-05-14 02:49:35 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state STARTED 2025-05-14 02:49:35.667445 | orchestrator | 2025-05-14 02:49:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:35.669387 | orchestrator | 2025-05-14 02:49:35 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:35.671147 | orchestrator | 2025-05-14 02:49:35 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:35.673688 | 
orchestrator | 2025-05-14 02:49:35 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:35.673731 | orchestrator | 2025-05-14 02:49:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:38.716284 | orchestrator | 2025-05-14 02:49:38 | INFO  | Task e9a1eaeb-2b70-49ca-9043-901cbaa597dc is in state SUCCESS 2025-05-14 02:49:38.716384 | orchestrator | 2025-05-14 02:49:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:38.718005 | orchestrator | 2025-05-14 02:49:38 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:38.719632 | orchestrator | 2025-05-14 02:49:38 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:38.721179 | orchestrator | 2025-05-14 02:49:38 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:38.721241 | orchestrator | 2025-05-14 02:49:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:41.773998 | orchestrator | 2025-05-14 02:49:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:41.776311 | orchestrator | 2025-05-14 02:49:41 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:41.778972 | orchestrator | 2025-05-14 02:49:41 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:41.780922 | orchestrator | 2025-05-14 02:49:41 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:41.781089 | orchestrator | 2025-05-14 02:49:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:44.826909 | orchestrator | 2025-05-14 02:49:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:44.828670 | orchestrator | 2025-05-14 02:49:44 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:44.831599 | orchestrator | 2025-05-14 02:49:44 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:44.833910 | orchestrator | 2025-05-14 02:49:44 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:44.834388 | orchestrator | 2025-05-14 02:49:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:47.878395 | orchestrator | 2025-05-14 02:49:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:47.879599 | orchestrator | 2025-05-14 02:49:47 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:47.881088 | orchestrator | 2025-05-14 02:49:47 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:47.882593 | orchestrator | 2025-05-14 02:49:47 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:47.882624 | orchestrator | 2025-05-14 02:49:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:50.934554 | orchestrator | 2025-05-14 02:49:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:50.936037 | orchestrator | 2025-05-14 02:49:50 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:50.937835 | orchestrator | 2025-05-14 02:49:50 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:50.940025 | orchestrator | 2025-05-14 02:49:50 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:50.940075 | 
orchestrator | 2025-05-14 02:49:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:53.989279 | orchestrator | 2025-05-14 02:49:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:53.991186 | orchestrator | 2025-05-14 02:49:53 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:53.993163 | orchestrator | 2025-05-14 02:49:53 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:53.994599 | orchestrator | 2025-05-14 02:49:53 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:53.994644 | orchestrator | 2025-05-14 02:49:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:49:57.051889 | orchestrator | 2025-05-14 02:49:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:49:57.054257 | orchestrator | 2025-05-14 02:49:57 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:49:57.056098 | orchestrator | 2025-05-14 02:49:57 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:49:57.057479 | orchestrator | 2025-05-14 02:49:57 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:49:57.057531 | orchestrator | 2025-05-14 02:49:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:00.109673 | orchestrator | 2025-05-14 02:50:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:00.110754 | orchestrator | 2025-05-14 02:50:00 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:00.113112 | orchestrator | 2025-05-14 02:50:00 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:00.115094 | orchestrator | 2025-05-14 02:50:00 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:50:00.115261 | orchestrator | 2025-05-14 02:50:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:03.155640 | orchestrator | 2025-05-14 02:50:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:03.156359 | orchestrator | 2025-05-14 02:50:03 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:03.157580 | orchestrator | 2025-05-14 02:50:03 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:03.158608 | orchestrator | 2025-05-14 02:50:03 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:50:03.158651 | orchestrator | 2025-05-14 02:50:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:06.199184 | orchestrator | 2025-05-14 02:50:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:06.201566 | orchestrator | 2025-05-14 02:50:06 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:06.204116 | orchestrator | 2025-05-14 02:50:06 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:06.205491 | orchestrator | 2025-05-14 02:50:06 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:50:06.205522 | orchestrator | 2025-05-14 02:50:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:09.246982 | orchestrator | 2025-05-14 02:50:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:09.248542 | orchestrator | 2025-05-14 
02:50:09 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:09.249680 | orchestrator | 2025-05-14 02:50:09 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:09.250766 | orchestrator | 2025-05-14 02:50:09 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:50:09.250799 | orchestrator | 2025-05-14 02:50:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:12.300457 | orchestrator | 2025-05-14 02:50:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:12.301582 | orchestrator | 2025-05-14 02:50:12 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:12.304584 | orchestrator | 2025-05-14 02:50:12 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:12.307752 | orchestrator | 2025-05-14 02:50:12 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state STARTED 2025-05-14 02:50:12.307805 | orchestrator | 2025-05-14 02:50:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:15.359596 | orchestrator | 2025-05-14 02:50:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:15.364034 | orchestrator | 2025-05-14 02:50:15 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:15.364120 | orchestrator | 2025-05-14 02:50:15 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:15.368009 | orchestrator | 2025-05-14 02:50:15 | INFO  | Task 7b74e9fe-277f-43b5-9289-ced26c5f6132 is in state SUCCESS 2025-05-14 02:50:15.368087 | orchestrator | 2025-05-14 02:50:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:18.418644 | orchestrator | 2025-05-14 02:50:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:18.420615 | orchestrator | 2025-05-14 02:50:18 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:18.422442 | orchestrator | 2025-05-14 02:50:18 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:18.422472 | orchestrator | 2025-05-14 02:50:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:21.471620 | orchestrator | 2025-05-14 02:50:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:21.471758 | orchestrator | 2025-05-14 02:50:21 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:21.471938 | orchestrator | 2025-05-14 02:50:21 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:21.472272 | orchestrator | 2025-05-14 02:50:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:24.519291 | orchestrator | 2025-05-14 02:50:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:24.522118 | orchestrator | 2025-05-14 02:50:24 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:24.523338 | orchestrator | 2025-05-14 02:50:24 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:24.523367 | orchestrator | 2025-05-14 02:50:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:27.577224 | orchestrator | 2025-05-14 02:50:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:27.578881 | orchestrator | 2025-05-14 02:50:27 | INFO  | Task 
c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:27.580445 | orchestrator | 2025-05-14 02:50:27 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:27.580511 | orchestrator | 2025-05-14 02:50:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:30.625885 | orchestrator | 2025-05-14 02:50:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:30.626367 | orchestrator | 2025-05-14 02:50:30 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:30.627671 | orchestrator | 2025-05-14 02:50:30 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:30.627722 | orchestrator | 2025-05-14 02:50:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:33.674394 | orchestrator | 2025-05-14 02:50:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:33.674499 | orchestrator | 2025-05-14 02:50:33 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:33.675406 | orchestrator | 2025-05-14 02:50:33 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:33.675500 | orchestrator | 2025-05-14 02:50:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:36.726362 | orchestrator | 2025-05-14 02:50:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:36.726756 | orchestrator | 2025-05-14 02:50:36 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state STARTED 2025-05-14 02:50:36.728022 | orchestrator | 2025-05-14 02:50:36 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:36.728151 | orchestrator | 2025-05-14 02:50:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:39.775607 | orchestrator | 2025-05-14 02:50:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:39.778003 | orchestrator | 2025-05-14 02:50:39 | INFO  | Task c195aab9-f9c1-4686-a988-b8f2a7cef96a is in state SUCCESS 2025-05-14 02:50:39.779426 | orchestrator | 2025-05-14 02:50:39.779463 | orchestrator | 2025-05-14 02:50:39.779476 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:50:39.779488 | orchestrator | 2025-05-14 02:50:39.779500 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:50:39.779512 | orchestrator | Wednesday 14 May 2025 02:48:40 +0000 (0:00:00.323) 0:00:00.323 ********* 2025-05-14 02:50:39.779539 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:50:39.779551 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:50:39.779562 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:50:39.779573 | orchestrator | 2025-05-14 02:50:39.779584 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:50:39.779594 | orchestrator | Wednesday 14 May 2025 02:48:41 +0000 (0:00:00.416) 0:00:00.740 ********* 2025-05-14 02:50:39.779605 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-14 02:50:39.779616 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-14 02:50:39.779649 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-14 02:50:39.779661 | orchestrator | 2025-05-14 02:50:39.779672 | orchestrator | PLAY [Apply role octavia] 
****************************************************** 2025-05-14 02:50:39.779682 | orchestrator | 2025-05-14 02:50:39.779693 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-14 02:50:39.779704 | orchestrator | Wednesday 14 May 2025 02:48:41 +0000 (0:00:00.305) 0:00:01.046 ********* 2025-05-14 02:50:39.779715 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:50:39.779726 | orchestrator | 2025-05-14 02:50:39.779737 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-14 02:50:39.779783 | orchestrator | Wednesday 14 May 2025 02:48:42 +0000 (0:00:00.757) 0:00:01.803 ********* 2025-05-14 02:50:39.779796 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-14 02:50:39.779906 | orchestrator | 2025-05-14 02:50:39.779918 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-14 02:50:39.779929 | orchestrator | Wednesday 14 May 2025 02:48:45 +0000 (0:00:03.659) 0:00:05.463 ********* 2025-05-14 02:50:39.779939 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-14 02:50:39.779950 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-14 02:50:39.779960 | orchestrator | 2025-05-14 02:50:39.779972 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-14 02:50:39.779984 | orchestrator | Wednesday 14 May 2025 02:48:52 +0000 (0:00:06.400) 0:00:11.863 ********* 2025-05-14 02:50:39.780058 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:50:39.780072 | orchestrator | 2025-05-14 02:50:39.780134 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-14 02:50:39.780151 | orchestrator | Wednesday 14 May 2025 02:48:55 +0000 (0:00:03.482) 0:00:15.346 ********* 2025-05-14 02:50:39.780170 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:50:39.780206 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-14 02:50:39.780217 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-14 02:50:39.780228 | orchestrator | 2025-05-14 02:50:39.780238 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-14 02:50:39.780249 | orchestrator | Wednesday 14 May 2025 02:49:04 +0000 (0:00:08.256) 0:00:23.602 ********* 2025-05-14 02:50:39.780260 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:50:39.780271 | orchestrator | 2025-05-14 02:50:39.780281 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-14 02:50:39.780292 | orchestrator | Wednesday 14 May 2025 02:49:07 +0000 (0:00:03.274) 0:00:26.877 ********* 2025-05-14 02:50:39.780303 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-14 02:50:39.780313 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-14 02:50:39.780324 | orchestrator | 2025-05-14 02:50:39.780334 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-14 02:50:39.780344 | orchestrator | Wednesday 14 May 2025 02:49:15 +0000 (0:00:07.961) 0:00:34.838 ********* 2025-05-14 
02:50:39.780355 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-14 02:50:39.780365 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-14 02:50:39.780376 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-14 02:50:39.780386 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-14 02:50:39.780397 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-14 02:50:39.780435 | orchestrator | 2025-05-14 02:50:39.780446 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-14 02:50:39.780467 | orchestrator | Wednesday 14 May 2025 02:49:31 +0000 (0:00:16.031) 0:00:50.870 ********* 2025-05-14 02:50:39.780478 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:50:39.780530 | orchestrator | 2025-05-14 02:50:39.780541 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-14 02:50:39.780552 | orchestrator | Wednesday 14 May 2025 02:49:32 +0000 (0:00:00.960) 0:00:51.831 ********* 2025-05-14 02:50:39.780579 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "
<html><body><h1>503 Service Unavailable</h1>
\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-05-14 02:50:39.780595 | orchestrator | 2025-05-14 02:50:39.780607 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:50:39.780625 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-14 02:50:39.780637 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:50:39.780648 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:50:39.780658 | orchestrator | 2025-05-14 02:50:39.780669 | orchestrator | 2025-05-14 02:50:39.780680 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:50:39.780690 | orchestrator | Wednesday 14 May 2025 02:49:35 +0000 (0:00:03.491) 0:00:55.323 ********* 2025-05-14 02:50:39.780701 | orchestrator | =============================================================================== 2025-05-14 02:50:39.780711 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.03s 2025-05-14 02:50:39.780802 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.26s 2025-05-14 02:50:39.780815 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.96s 2025-05-14 02:50:39.780826 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.40s 2025-05-14 02:50:39.780836 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.66s 2025-05-14 02:50:39.780847 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.49s 2025-05-14 02:50:39.780857 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.48s 2025-05-14 02:50:39.780868 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.27s 2025-05-14 02:50:39.780879 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.96s 2025-05-14 02:50:39.780889 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.76s 2025-05-14 02:50:39.780900 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2025-05-14 02:50:39.780910 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.31s 2025-05-14 02:50:39.780921 | orchestrator | 2025-05-14 02:50:39.780931 | orchestrator | 2025-05-14 02:50:39.780942 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:50:39.780953 | orchestrator | 2025-05-14 02:50:39.780964 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:50:39.780974 | orchestrator | Wednesday 14 May 2025 02:48:13 +0000 (0:00:00.301) 0:00:00.301 ********* 2025-05-14 02:50:39.781016 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:50:39.781028 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:50:39.781039 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:50:39.781050 | orchestrator | 2025-05-14 02:50:39.781060 | orchestrator | TASK [Group hosts based 
on enabled services] *********************************** 2025-05-14 02:50:39.781080 | orchestrator | Wednesday 14 May 2025 02:48:13 +0000 (0:00:00.503) 0:00:00.805 ********* 2025-05-14 02:50:39.781091 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-14 02:50:39.781102 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-14 02:50:39.781112 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-05-14 02:50:39.781123 | orchestrator | 2025-05-14 02:50:39.781133 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-05-14 02:50:39.781144 | orchestrator | 2025-05-14 02:50:39.781154 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-14 02:50:39.781165 | orchestrator | Wednesday 14 May 2025 02:48:14 +0000 (0:00:00.713) 0:00:01.519 ********* 2025-05-14 02:50:39.781205 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:50:39.781217 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:50:39.781228 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:50:39.781239 | orchestrator | 2025-05-14 02:50:39.781249 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:50:39.781260 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:50:39.781271 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:50:39.781282 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:50:39.781293 | orchestrator | 2025-05-14 02:50:39.781303 | orchestrator | 2025-05-14 02:50:39.781314 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:50:39.781325 | orchestrator | Wednesday 14 May 2025 02:50:14 +0000 (0:01:59.928) 0:02:01.447 ********* 2025-05-14 02:50:39.781335 | orchestrator | =============================================================================== 2025-05-14 02:50:39.781346 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 119.93s 2025-05-14 02:50:39.781357 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s 2025-05-14 02:50:39.781368 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.50s 2025-05-14 02:50:39.781378 | orchestrator | 2025-05-14 02:50:39.781389 | orchestrator | 2025-05-14 02:50:39.781399 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:50:39.781410 | orchestrator | 2025-05-14 02:50:39.781420 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:50:39.781441 | orchestrator | Wednesday 14 May 2025 02:48:42 +0000 (0:00:00.328) 0:00:00.328 ********* 2025-05-14 02:50:39.781452 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:50:39.781463 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:50:39.781474 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:50:39.781484 | orchestrator | 2025-05-14 02:50:39.781495 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:50:39.781513 | orchestrator | Wednesday 14 May 2025 02:48:42 +0000 (0:00:00.408) 0:00:00.736 ********* 2025-05-14 02:50:39.781532 | orchestrator | ok: [testbed-node-0] => 
(item=enable_grafana_True) 2025-05-14 02:50:39.781546 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-14 02:50:39.781557 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-14 02:50:39.781567 | orchestrator | 2025-05-14 02:50:39.781578 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-14 02:50:39.781588 | orchestrator | 2025-05-14 02:50:39.781599 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-14 02:50:39.781610 | orchestrator | Wednesday 14 May 2025 02:48:43 +0000 (0:00:00.297) 0:00:01.033 ********* 2025-05-14 02:50:39.781620 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:50:39.781631 | orchestrator | 2025-05-14 02:50:39.781642 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-14 02:50:39.781659 | orchestrator | Wednesday 14 May 2025 02:48:43 +0000 (0:00:00.793) 0:00:01.827 ********* 2025-05-14 02:50:39.781672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:50:39.781686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:50:39.781697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:50:39.781708 | orchestrator | 2025-05-14 02:50:39.781719 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-14 02:50:39.781730 | orchestrator | Wednesday 14 May 2025 02:48:44 +0000 (0:00:00.874) 0:00:02.702 ********* 2025-05-14 
02:50:39.781740 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-14 02:50:39.781751 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-14 02:50:39.781762 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:50:39.781773 | orchestrator | 2025-05-14 02:50:39.781783 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-14 02:50:39.781793 | orchestrator | Wednesday 14 May 2025 02:48:45 +0000 (0:00:00.532) 0:00:03.234 ********* 2025-05-14 02:50:39.781804 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:50:39.781815 | orchestrator | 2025-05-14 02:50:39.781825 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-14 02:50:39.781836 | orchestrator | Wednesday 14 May 2025 02:48:46 +0000 (0:00:00.631) 0:00:03.866 ********* 2025-05-14 02:50:39.781861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:50:39.781880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:50:39.781892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:50:39.781903 | orchestrator | 2025-05-14 02:50:39.781914 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-05-14 02:50:39.781924 | orchestrator | Wednesday 14 May 2025 02:48:47 +0000 (0:00:01.396) 0:00:05.262 ********* 2025-05-14 02:50:39.781935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:50:39.781947 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:50:39.781958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:50:39.781969 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:50:39.781987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:50:39.782007 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:50:39.782065 | orchestrator | 2025-05-14 02:50:39.782081 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-05-14 02:50:39.782092 | orchestrator | Wednesday 14 May 2025 02:48:48 +0000 (0:00:00.699) 0:00:05.962 ********* 2025-05-14 02:50:39.782108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:50:39.782119 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:50:39.782130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:50:39.782141 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:50:39.782153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-14 02:50:39.782164 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:50:39.782237 | orchestrator | 2025-05-14 02:50:39.782250 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-14 02:50:39.782261 | orchestrator | Wednesday 14 May 2025 02:48:48 +0000 (0:00:00.714) 0:00:06.676 ********* 2025-05-14 02:50:39.782273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:50:39.782284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:50:39.782328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-14 02:50:39.782341 | orchestrator |
2025-05-14 02:50:39.782351 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-05-14 02:50:39.782363 | orchestrator | Wednesday 14 May 2025 02:48:50 +0000 (0:00:01.610) 0:00:08.286 *********
2025-05-14 02:50:39.782374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-14 02:50:39.782386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-14 02:50:39.782398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-14 02:50:39.782409 | orchestrator |
2025-05-14 02:50:39.782419 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-05-14 02:50:39.782430 | orchestrator | Wednesday 14 May 2025 02:48:51 +0000 (0:00:01.526) 0:00:09.813 *********
2025-05-14 02:50:39.782441 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:50:39.782452 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:50:39.782463 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:50:39.782474 | orchestrator |
2025-05-14 02:50:39.782485 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-05-14 02:50:39.782496 | orchestrator | Wednesday 14 May 2025 02:48:52 +0000 (0:00:00.286) 0:00:10.099 *********
2025-05-14 02:50:39.782506 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-14 02:50:39.782526 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-14 02:50:39.782538 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-14 02:50:39.782549 | orchestrator |
2025-05-14 02:50:39.782559 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-05-14 02:50:39.782570 | orchestrator | Wednesday 14 May 2025 02:48:53 +0000 (0:00:01.419) 0:00:11.518 *********
2025-05-14 02:50:39.782581 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-14 02:50:39.782591 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-14 02:50:39.782602 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-14 02:50:39.782613 | orchestrator |
2025-05-14 02:50:39.782631 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-05-14 02:50:39.782642 | orchestrator | Wednesday 14 May 2025 02:48:55 +0000 (0:00:01.501) 0:00:13.020 *********
2025-05-14 02:50:39.782653 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-14 02:50:39.782663 | orchestrator |
2025-05-14 02:50:39.782674 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-05-14 02:50:39.782696 | orchestrator | Wednesday 14 May 2025 02:48:55 +0000 (0:00:00.458) 0:00:13.478 *********
2025-05-14 02:50:39.782708 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-05-14 02:50:39.782719 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-05-14 02:50:39.782729 | orchestrator | ok: [testbed-node-1]
2025-05-14 02:50:39.782741 | orchestrator | ok: [testbed-node-0]
2025-05-14 02:50:39.782760 | orchestrator | ok: [testbed-node-2]
2025-05-14 02:50:39.782772 | orchestrator |
2025-05-14 02:50:39.782782 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-05-14 02:50:39.782792 | orchestrator | Wednesday 14 May 2025 02:48:56 +0000 (0:00:00.884) 0:00:14.362 *********
2025-05-14 02:50:39.782802 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:50:39.782812 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:50:39.782822 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:50:39.782831 | orchestrator |
2025-05-14 02:50:39.782841 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-05-14 02:50:39.782850 | orchestrator | Wednesday 14 May 2025 02:48:56 +0000 (0:00:00.447) 0:00:14.810 *********
2025-05-14 02:50:39.782861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1064580, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.4905207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp':
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.782872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1064580, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.4905207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.782889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1064580, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.4905207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.782900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1064572, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3345175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.782916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1064572, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3345175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.782931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1064572, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3345175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.782942 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1064569, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3315175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.782952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1064569, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3315175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.782962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1064569, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3315175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.782979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1064576, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3365176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.782989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1064576, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3365176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1064576, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3365176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1064551, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3255172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1064551, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3255172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1064551, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3255172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1064570, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3315175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1064570, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1747187642.3315175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1064570, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3315175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1064574, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3355174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1064574, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3355174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1064574, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3355174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1064549, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3245173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1064549, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3245173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1064549, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3245173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1064461, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.2895164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1064461, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.2895164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1064461, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.2895164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1064554, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3265173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1064554, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3265173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1064554, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3265173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1064463, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.2915165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1064463, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.2915165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1064463, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.2915165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1064573, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.3345175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1064573, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.3345175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1064573, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.3345175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1064556, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.3275173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1064556, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 
'ctime': 1747187642.3275173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1064556, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.3275173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1064577, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3375175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1064577, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3375175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1064577, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3375175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1064547, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3235173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1064547, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3235173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1064547, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3235173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1064571, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3335176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1064571, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3335176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1064571, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3335176, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1064462, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.2905166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1064462, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.2905166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1064462, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.2905166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1064465, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.2935166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1064465, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.2935166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1064465, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.2935166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1064567, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3305173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1064567, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3305173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1065199, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6565244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1064567, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.3305173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 
'inode': 1065199, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6565244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1065192, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.596523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1065199, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6565244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1065192, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.596523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1065454, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6635246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1065192, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1747187642.596523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1065454, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6635246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1064914, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.4905207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1065454, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6635246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1064914, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.4905207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1065457, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6675246, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1064914, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.4905207, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1065457, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6675246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1065457, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6675246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1065442, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6585245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1065442, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6585245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1065442, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6585245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1065445, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6605244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1065445, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6605244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1065445, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6605244, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1064917, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.4925208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-05-14 02:50:39.783950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1064917, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.4925208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1064917, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.4925208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.783990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1065195, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.596523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1065195, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.596523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1065464, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6685247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784021 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1065195, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.596523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1065464, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6685247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1065464, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6685247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1065451, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6615245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1065451, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6615245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1064928, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.4955208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1065451, 'dev': 166, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747187642.6615245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1064928, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.4955208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1064924, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.493521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1064928, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.4955208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1064924, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.493521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1064935, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.497521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1064935, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.497521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1064924, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.493521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1064946, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.595523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1064946, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 
1737057118.0, 'ctime': 1747187642.595523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1064935, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.497521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1065469, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6965253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1065469, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6965253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1064946, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.595523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1065469, 'dev': 166, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747187642.6965253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-14 02:50:39.784312 | orchestrator | 2025-05-14 02:50:39.784322 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-14 02:50:39.784332 | orchestrator | Wednesday 14 May 2025 02:49:30 +0000 (0:00:34.000) 0:00:48.810 ********* 2025-05-14 02:50:39.784353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:50:39.784364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:50:39.784374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-14 02:50:39.784384 | orchestrator | 2025-05-14 02:50:39.784394 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-14 02:50:39.784403 | orchestrator | Wednesday 14 May 2025 02:49:32 +0000 (0:00:01.294) 0:00:50.104 ********* 2025-05-14 02:50:39.784413 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:50:39.784422 | orchestrator | 2025-05-14 02:50:39.784432 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-05-14 02:50:39.784441 | orchestrator | Wednesday 14 May 2025 02:49:35 +0000 (0:00:02.903) 0:00:53.008 ********* 2025-05-14 02:50:39.784451 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:50:39.784460 | orchestrator | 2025-05-14 02:50:39.784469 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-14 02:50:39.784479 | orchestrator | Wednesday 14 May 2025 02:49:37 +0000 (0:00:02.362) 0:00:55.371 ********* 2025-05-14 02:50:39.784488 | 
orchestrator | 2025-05-14 02:50:39.784498 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-14 02:50:39.784507 | orchestrator | Wednesday 14 May 2025 02:49:37 +0000 (0:00:00.062) 0:00:55.433 ********* 2025-05-14 02:50:39.784516 | orchestrator | 2025-05-14 02:50:39.784526 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-14 02:50:39.784535 | orchestrator | Wednesday 14 May 2025 02:49:37 +0000 (0:00:00.054) 0:00:55.487 ********* 2025-05-14 02:50:39.784545 | orchestrator | 2025-05-14 02:50:39.784555 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-05-14 02:50:39.784570 | orchestrator | Wednesday 14 May 2025 02:49:37 +0000 (0:00:00.222) 0:00:55.710 ********* 2025-05-14 02:50:39.784580 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:50:39.784589 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:50:39.784599 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:50:39.784608 | orchestrator | 2025-05-14 02:50:39.784617 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-05-14 02:50:39.784627 | orchestrator | Wednesday 14 May 2025 02:49:44 +0000 (0:00:06.851) 0:01:02.561 ********* 2025-05-14 02:50:39.784636 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:50:39.784645 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:50:39.784655 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-05-14 02:50:39.784665 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-05-14 02:50:39.784674 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:50:39.784684 | orchestrator | 2025-05-14 02:50:39.784694 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-05-14 02:50:39.784703 | orchestrator | Wednesday 14 May 2025 02:50:11 +0000 (0:00:26.837) 0:01:29.399 ********* 2025-05-14 02:50:39.784713 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:50:39.784722 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:50:39.784732 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:50:39.784741 | orchestrator | 2025-05-14 02:50:39.784750 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-05-14 02:50:39.784760 | orchestrator | Wednesday 14 May 2025 02:50:31 +0000 (0:00:19.727) 0:01:49.126 ********* 2025-05-14 02:50:39.784769 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:50:39.784779 | orchestrator | 2025-05-14 02:50:39.784788 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-05-14 02:50:39.784798 | orchestrator | Wednesday 14 May 2025 02:50:33 +0000 (0:00:02.435) 0:01:51.562 ********* 2025-05-14 02:50:39.784808 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:50:39.784888 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:50:39.784901 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:50:39.784911 | orchestrator | 2025-05-14 02:50:39.784920 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-05-14 02:50:39.784930 | orchestrator | Wednesday 14 May 2025 02:50:34 +0000 (0:00:00.496) 0:01:52.059 ********* 2025-05-14 02:50:39.784945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 
'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-05-14 02:50:39.784956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-05-14 02:50:39.784966 | orchestrator | 2025-05-14 02:50:39.784975 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-05-14 02:50:39.784984 | orchestrator | Wednesday 14 May 2025 02:50:36 +0000 (0:00:02.507) 0:01:54.566 ********* 2025-05-14 02:50:39.784994 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:50:39.785003 | orchestrator | 2025-05-14 02:50:39.785013 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:50:39.785022 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:50:39.785032 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:50:39.785048 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-14 02:50:39.785058 | orchestrator | 2025-05-14 02:50:39.785067 | orchestrator | 2025-05-14 02:50:39.785076 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:50:39.785085 | orchestrator | Wednesday 14 May 2025 02:50:37 +0000 (0:00:00.494) 0:01:55.060 ********* 2025-05-14 02:50:39.785095 | orchestrator | =============================================================================== 2025-05-14 02:50:39.785104 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 34.00s 2025-05-14 02:50:39.785114 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.84s 2025-05-14 02:50:39.785123 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 19.73s 2025-05-14 02:50:39.785132 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.85s 2025-05-14 02:50:39.785142 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.90s 2025-05-14 02:50:39.785151 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.51s 2025-05-14 02:50:39.785161 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.44s 2025-05-14 02:50:39.785170 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.36s 2025-05-14 02:50:39.785202 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.61s 2025-05-14 02:50:39.785212 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.53s 2025-05-14 02:50:39.785221 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.50s 2025-05-14 02:50:39.785231 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.42s 2025-05-14 02:50:39.785240 | orchestrator | service-cert-copy : 
grafana | Copying over extra CA certificates -------- 1.40s 2025-05-14 02:50:39.785249 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.30s 2025-05-14 02:50:39.785259 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.88s 2025-05-14 02:50:39.785268 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.87s 2025-05-14 02:50:39.785278 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.79s 2025-05-14 02:50:39.785287 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.71s 2025-05-14 02:50:39.785296 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.70s 2025-05-14 02:50:39.785305 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.63s 2025-05-14 02:50:39.785315 | orchestrator | 2025-05-14 02:50:39 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:39.785324 | orchestrator | 2025-05-14 02:50:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:42.841341 | orchestrator | 2025-05-14 02:50:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:42.843647 | orchestrator | 2025-05-14 02:50:42 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:42.843706 | orchestrator | 2025-05-14 02:50:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:45.887889 | orchestrator | 2025-05-14 02:50:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:45.889061 | orchestrator | 2025-05-14 02:50:45 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:45.889144 | orchestrator | 2025-05-14 02:50:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:48.945832 | orchestrator | 2025-05-14 02:50:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:48.947568 | orchestrator | 2025-05-14 02:50:48 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:48.947676 | orchestrator | 2025-05-14 02:50:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:52.011660 | orchestrator | 2025-05-14 02:50:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:52.012158 | orchestrator | 2025-05-14 02:50:52 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:52.012513 | orchestrator | 2025-05-14 02:50:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:55.067417 | orchestrator | 2025-05-14 02:50:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:55.068225 | orchestrator | 2025-05-14 02:50:55 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:55.068251 | orchestrator | 2025-05-14 02:50:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:50:58.113787 | orchestrator | 2025-05-14 02:50:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:50:58.114695 | orchestrator | 2025-05-14 02:50:58 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:50:58.114753 | orchestrator | 2025-05-14 02:50:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:01.149506 | orchestrator | 2025-05-14 02:51:01 | 
INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:01.150119 | orchestrator | 2025-05-14 02:51:01 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:01.150150 | orchestrator | 2025-05-14 02:51:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:04.182963 | orchestrator | 2025-05-14 02:51:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:04.183114 | orchestrator | 2025-05-14 02:51:04 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:04.183141 | orchestrator | 2025-05-14 02:51:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:07.222433 | orchestrator | 2025-05-14 02:51:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:07.223537 | orchestrator | 2025-05-14 02:51:07 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:07.224893 | orchestrator | 2025-05-14 02:51:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:10.272356 | orchestrator | 2025-05-14 02:51:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:10.272830 | orchestrator | 2025-05-14 02:51:10 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:10.272843 | orchestrator | 2025-05-14 02:51:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:13.320084 | orchestrator | 2025-05-14 02:51:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:13.320226 | orchestrator | 2025-05-14 02:51:13 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:13.320244 | orchestrator | 2025-05-14 02:51:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:16.362923 | orchestrator | 2025-05-14 02:51:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:16.370153 | orchestrator | 2025-05-14 02:51:16 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:16.370278 | orchestrator | 2025-05-14 02:51:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:19.402212 | orchestrator | 2025-05-14 02:51:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:19.403718 | orchestrator | 2025-05-14 02:51:19 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:19.403750 | orchestrator | 2025-05-14 02:51:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:22.460569 | orchestrator | 2025-05-14 02:51:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:22.462729 | orchestrator | 2025-05-14 02:51:22 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:22.462745 | orchestrator | 2025-05-14 02:51:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:25.522926 | orchestrator | 2025-05-14 02:51:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:25.523647 | orchestrator | 2025-05-14 02:51:25 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:25.523684 | orchestrator | 2025-05-14 02:51:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:28.572083 | orchestrator | 2025-05-14 02:51:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:28.572854 | 
orchestrator | 2025-05-14 02:51:28 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:28.572894 | orchestrator | 2025-05-14 02:51:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:31.630085 | orchestrator | 2025-05-14 02:51:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:31.631274 | orchestrator | 2025-05-14 02:51:31 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:31.631324 | orchestrator | 2025-05-14 02:51:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:34.721811 | orchestrator | 2025-05-14 02:51:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:34.725964 | orchestrator | 2025-05-14 02:51:34 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:34.726332 | orchestrator | 2025-05-14 02:51:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:37.778799 | orchestrator | 2025-05-14 02:51:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:37.779109 | orchestrator | 2025-05-14 02:51:37 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:37.779129 | orchestrator | 2025-05-14 02:51:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:40.832178 | orchestrator | 2025-05-14 02:51:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:40.832287 | orchestrator | 2025-05-14 02:51:40 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:40.832303 | orchestrator | 2025-05-14 02:51:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:43.888194 | orchestrator | 2025-05-14 02:51:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:43.888303 | orchestrator | 2025-05-14 02:51:43 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:43.888319 | orchestrator | 2025-05-14 02:51:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:46.940238 | orchestrator | 2025-05-14 02:51:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:47.043597 | orchestrator | 2025-05-14 02:51:46 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:47.043682 | orchestrator | 2025-05-14 02:51:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:49.987862 | orchestrator | 2025-05-14 02:51:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:49.989857 | orchestrator | 2025-05-14 02:51:49 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:49.990392 | orchestrator | 2025-05-14 02:51:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:53.060650 | orchestrator | 2025-05-14 02:51:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:53.061783 | orchestrator | 2025-05-14 02:51:53 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:53.062000 | orchestrator | 2025-05-14 02:51:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:56.111261 | orchestrator | 2025-05-14 02:51:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:56.113343 | orchestrator | 2025-05-14 02:51:56 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state 
STARTED 2025-05-14 02:51:56.113426 | orchestrator | 2025-05-14 02:51:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:51:59.156521 | orchestrator | 2025-05-14 02:51:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:51:59.157752 | orchestrator | 2025-05-14 02:51:59 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:51:59.157819 | orchestrator | 2025-05-14 02:51:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:02.202982 | orchestrator | 2025-05-14 02:52:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:02.203174 | orchestrator | 2025-05-14 02:52:02 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:02.203192 | orchestrator | 2025-05-14 02:52:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:05.267480 | orchestrator | 2025-05-14 02:52:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:05.271443 | orchestrator | 2025-05-14 02:52:05 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:05.271508 | orchestrator | 2025-05-14 02:52:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:08.313318 | orchestrator | 2025-05-14 02:52:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:08.315419 | orchestrator | 2025-05-14 02:52:08 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:08.315463 | orchestrator | 2025-05-14 02:52:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:11.361157 | orchestrator | 2025-05-14 02:52:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:11.363017 | orchestrator | 2025-05-14 02:52:11 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:11.363085 | orchestrator | 2025-05-14 02:52:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:14.411525 | orchestrator | 2025-05-14 02:52:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:14.411615 | orchestrator | 2025-05-14 02:52:14 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:14.411624 | orchestrator | 2025-05-14 02:52:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:17.460366 | orchestrator | 2025-05-14 02:52:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:17.462477 | orchestrator | 2025-05-14 02:52:17 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:17.462557 | orchestrator | 2025-05-14 02:52:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:20.526537 | orchestrator | 2025-05-14 02:52:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:20.526667 | orchestrator | 2025-05-14 02:52:20 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:20.526691 | orchestrator | 2025-05-14 02:52:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:23.585068 | orchestrator | 2025-05-14 02:52:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:23.585395 | orchestrator | 2025-05-14 02:52:23 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:23.585431 | orchestrator | 2025-05-14 02:52:23 | INFO  | Wait 1 second(s) 
until the next check 2025-05-14 02:52:26.637764 | orchestrator | 2025-05-14 02:52:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:26.639225 | orchestrator | 2025-05-14 02:52:26 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:26.639288 | orchestrator | 2025-05-14 02:52:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:29.686987 | orchestrator | 2025-05-14 02:52:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:29.688504 | orchestrator | 2025-05-14 02:52:29 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:29.688525 | orchestrator | 2025-05-14 02:52:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:32.738952 | orchestrator | 2025-05-14 02:52:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:32.739964 | orchestrator | 2025-05-14 02:52:32 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:32.740009 | orchestrator | 2025-05-14 02:52:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:35.794079 | orchestrator | 2025-05-14 02:52:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:35.794457 | orchestrator | 2025-05-14 02:52:35 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:35.794486 | orchestrator | 2025-05-14 02:52:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:38.856552 | orchestrator | 2025-05-14 02:52:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:38.857777 | orchestrator | 2025-05-14 02:52:38 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:38.857819 | orchestrator | 2025-05-14 02:52:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:41.907696 | orchestrator | 2025-05-14 02:52:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:41.909370 | orchestrator | 2025-05-14 02:52:41 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:41.909408 | orchestrator | 2025-05-14 02:52:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:44.961640 | orchestrator | 2025-05-14 02:52:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:44.962790 | orchestrator | 2025-05-14 02:52:44 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:44.962833 | orchestrator | 2025-05-14 02:52:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:48.009911 | orchestrator | 2025-05-14 02:52:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:48.011318 | orchestrator | 2025-05-14 02:52:48 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:48.011356 | orchestrator | 2025-05-14 02:52:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:51.068524 | orchestrator | 2025-05-14 02:52:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:51.069558 | orchestrator | 2025-05-14 02:52:51 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:51.069602 | orchestrator | 2025-05-14 02:52:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:54.118352 | orchestrator | 2025-05-14 02:52:54 | INFO  | 
Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:54.119270 | orchestrator | 2025-05-14 02:52:54 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:54.119315 | orchestrator | 2025-05-14 02:52:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:52:57.163614 | orchestrator | 2025-05-14 02:52:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:52:57.165449 | orchestrator | 2025-05-14 02:52:57 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:52:57.165513 | orchestrator | 2025-05-14 02:52:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:00.203025 | orchestrator | 2025-05-14 02:53:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:00.206597 | orchestrator | 2025-05-14 02:53:00 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:00.206666 | orchestrator | 2025-05-14 02:53:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:03.248920 | orchestrator | 2025-05-14 02:53:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:03.250770 | orchestrator | 2025-05-14 02:53:03 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:03.250826 | orchestrator | 2025-05-14 02:53:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:06.304151 | orchestrator | 2025-05-14 02:53:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:06.305762 | orchestrator | 2025-05-14 02:53:06 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:06.305871 | orchestrator | 2025-05-14 02:53:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:09.356228 | orchestrator | 2025-05-14 02:53:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:09.356996 | orchestrator | 2025-05-14 02:53:09 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:09.357028 | orchestrator | 2025-05-14 02:53:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:12.400191 | orchestrator | 2025-05-14 02:53:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:12.401188 | orchestrator | 2025-05-14 02:53:12 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:12.401514 | orchestrator | 2025-05-14 02:53:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:15.430550 | orchestrator | 2025-05-14 02:53:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:15.432573 | orchestrator | 2025-05-14 02:53:15 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:15.432628 | orchestrator | 2025-05-14 02:53:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:18.483815 | orchestrator | 2025-05-14 02:53:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:18.485379 | orchestrator | 2025-05-14 02:53:18 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:18.485438 | orchestrator | 2025-05-14 02:53:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:21.526896 | orchestrator | 2025-05-14 02:53:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:21.528855 | 
orchestrator | 2025-05-14 02:53:21 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:21.528900 | orchestrator | 2025-05-14 02:53:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:24.579136 | orchestrator | 2025-05-14 02:53:24 | INFO  | Task f6fcd72e-e29a-4d00-8031-fbf894b1ef6f is in state STARTED 2025-05-14 02:53:24.579597 | orchestrator | 2025-05-14 02:53:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:24.581689 | orchestrator | 2025-05-14 02:53:24 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:24.582091 | orchestrator | 2025-05-14 02:53:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:27.632643 | orchestrator | 2025-05-14 02:53:27 | INFO  | Task f6fcd72e-e29a-4d00-8031-fbf894b1ef6f is in state STARTED 2025-05-14 02:53:27.633826 | orchestrator | 2025-05-14 02:53:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:27.634585 | orchestrator | 2025-05-14 02:53:27 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:27.634795 | orchestrator | 2025-05-14 02:53:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:30.688488 | orchestrator | 2025-05-14 02:53:30 | INFO  | Task f6fcd72e-e29a-4d00-8031-fbf894b1ef6f is in state STARTED 2025-05-14 02:53:30.692371 | orchestrator | 2025-05-14 02:53:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:30.695054 | orchestrator | 2025-05-14 02:53:30 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:30.695523 | orchestrator | 2025-05-14 02:53:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:33.758851 | orchestrator | 2025-05-14 02:53:33 | INFO  | Task f6fcd72e-e29a-4d00-8031-fbf894b1ef6f is in state STARTED 2025-05-14 02:53:33.758956 | orchestrator | 2025-05-14 02:53:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:33.763377 | orchestrator | 2025-05-14 02:53:33 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:33.763805 | orchestrator | 2025-05-14 02:53:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:36.809948 | orchestrator | 2025-05-14 02:53:36 | INFO  | Task f6fcd72e-e29a-4d00-8031-fbf894b1ef6f is in state SUCCESS 2025-05-14 02:53:36.811084 | orchestrator | 2025-05-14 02:53:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:36.813015 | orchestrator | 2025-05-14 02:53:36 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:36.813242 | orchestrator | 2025-05-14 02:53:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:39.864900 | orchestrator | 2025-05-14 02:53:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:39.866389 | orchestrator | 2025-05-14 02:53:39 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:39.866444 | orchestrator | 2025-05-14 02:53:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:42.914485 | orchestrator | 2025-05-14 02:53:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:42.916348 | orchestrator | 2025-05-14 02:53:42 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:42.916430 | orchestrator | 2025-05-14 02:53:42 | INFO  | Wait 
1 second(s) until the next check 2025-05-14 02:53:45.958499 | orchestrator | 2025-05-14 02:53:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:45.959402 | orchestrator | 2025-05-14 02:53:45 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:45.959433 | orchestrator | 2025-05-14 02:53:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:49.005270 | orchestrator | 2025-05-14 02:53:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:49.005359 | orchestrator | 2025-05-14 02:53:49 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:49.005368 | orchestrator | 2025-05-14 02:53:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:52.059006 | orchestrator | 2025-05-14 02:53:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:52.064919 | orchestrator | 2025-05-14 02:53:52 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:52.064980 | orchestrator | 2025-05-14 02:53:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:55.096182 | orchestrator | 2025-05-14 02:53:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:55.097170 | orchestrator | 2025-05-14 02:53:55 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:55.097262 | orchestrator | 2025-05-14 02:53:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:53:58.132618 | orchestrator | 2025-05-14 02:53:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:53:58.132699 | orchestrator | 2025-05-14 02:53:58 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:53:58.132708 | orchestrator | 2025-05-14 02:53:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:01.184775 | orchestrator | 2025-05-14 02:54:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:01.185627 | orchestrator | 2025-05-14 02:54:01 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:54:01.185671 | orchestrator | 2025-05-14 02:54:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:04.232302 | orchestrator | 2025-05-14 02:54:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:04.232419 | orchestrator | 2025-05-14 02:54:04 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:54:04.232435 | orchestrator | 2025-05-14 02:54:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:07.275336 | orchestrator | 2025-05-14 02:54:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:07.276704 | orchestrator | 2025-05-14 02:54:07 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:54:07.276787 | orchestrator | 2025-05-14 02:54:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:10.316513 | orchestrator | 2025-05-14 02:54:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:10.318368 | orchestrator | 2025-05-14 02:54:10 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:54:10.318445 | orchestrator | 2025-05-14 02:54:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:13.371073 | orchestrator | 2025-05-14 02:54:13 | 
INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:13.373014 | orchestrator | 2025-05-14 02:54:13 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:54:13.373123 | orchestrator | 2025-05-14 02:54:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:16.431487 | orchestrator | 2025-05-14 02:54:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:16.433196 | orchestrator | 2025-05-14 02:54:16 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:54:16.433232 | orchestrator | 2025-05-14 02:54:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:19.488398 | orchestrator | 2025-05-14 02:54:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:19.490002 | orchestrator | 2025-05-14 02:54:19 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:54:19.490106 | orchestrator | 2025-05-14 02:54:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:22.538992 | orchestrator | 2025-05-14 02:54:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:22.540299 | orchestrator | 2025-05-14 02:54:22 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:54:22.540580 | orchestrator | 2025-05-14 02:54:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:25.587239 | orchestrator | 2025-05-14 02:54:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:25.588480 | orchestrator | 2025-05-14 02:54:25 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:54:25.588512 | orchestrator | 2025-05-14 02:54:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:28.641092 | orchestrator | 2025-05-14 02:54:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:28.642891 | orchestrator | 2025-05-14 02:54:28 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state STARTED 2025-05-14 02:54:28.642959 | orchestrator | 2025-05-14 02:54:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:31.682582 | orchestrator | 2025-05-14 02:54:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:31.690393 | orchestrator | 2025-05-14 02:54:31 | INFO  | Task 981f61e9-b628-4c1b-8f58-3a59336376a1 is in state SUCCESS 2025-05-14 02:54:31.692267 | orchestrator | 2025-05-14 02:54:31.692329 | orchestrator | None 2025-05-14 02:54:31.692339 | orchestrator | 2025-05-14 02:54:31.692345 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-14 02:54:31.692351 | orchestrator | 2025-05-14 02:54:31.692356 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-14 02:54:31.692361 | orchestrator | Wednesday 14 May 2025 02:45:57 +0000 (0:00:00.232) 0:00:00.232 ********* 2025-05-14 02:54:31.692367 | orchestrator | changed: [testbed-manager] 2025-05-14 02:54:31.692373 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.692378 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:54:31.692384 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:54:31.692389 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:54:31.692393 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:54:31.692398 | orchestrator | changed: [testbed-node-5] 
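The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages above is the deployment CLI polling its queued tasks until each one reaches a terminal state; once a task finishes, its captured Ansible output is replayed, as seen here. A minimal Python sketch of such a wait loop, assuming a caller-supplied `fetch_state` lookup (the task IDs are the ones from the log; this is not the actual osism client code):

```python
import time
from typing import Callable, Iterable


def wait_for_tasks(task_ids: Iterable[str],
                   fetch_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    """Poll every task until it leaves the running states."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = fetch_state(task_id)          # e.g. PENDING/STARTED/SUCCESS
            print(f"INFO | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):   # terminal states
                pending.discard(task_id)
        if pending:
            print(f"INFO | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)


if __name__ == "__main__":
    # Stubbed backend that reports success immediately, just to run the sketch.
    wait_for_tasks(
        ["d82f8ed9-5664-4bc4-a3e9-26e1a4e29521",
         "981f61e9-b628-4c1b-8f58-3a59336376a1"],
        fetch_state=lambda task_id: "SUCCESS",
    )
```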
2025-05-14 02:54:31.692403 | orchestrator | 2025-05-14 02:54:31.692408 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-14 02:54:31.692430 | orchestrator | Wednesday 14 May 2025 02:45:58 +0000 (0:00:01.669) 0:00:01.901 ********* 2025-05-14 02:54:31.692435 | orchestrator | changed: [testbed-manager] 2025-05-14 02:54:31.692440 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.692445 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:54:31.692449 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:54:31.692455 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:54:31.692459 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:54:31.692464 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:54:31.692469 | orchestrator | 2025-05-14 02:54:31.692474 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-14 02:54:31.692478 | orchestrator | Wednesday 14 May 2025 02:46:00 +0000 (0:00:01.952) 0:00:03.854 ********* 2025-05-14 02:54:31.692483 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-05-14 02:54:31.692489 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-05-14 02:54:31.692493 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-05-14 02:54:31.692498 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-05-14 02:54:31.692503 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-05-14 02:54:31.692507 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-05-14 02:54:31.692512 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-05-14 02:54:31.692517 | orchestrator | 2025-05-14 02:54:31.692522 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-05-14 02:54:31.692526 | orchestrator | 2025-05-14 02:54:31.692531 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-14 02:54:31.692536 | orchestrator | Wednesday 14 May 2025 02:46:01 +0000 (0:00:00.913) 0:00:04.768 ********* 2025-05-14 02:54:31.692541 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:54:31.692546 | orchestrator | 2025-05-14 02:54:31.692550 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-05-14 02:54:31.692555 | orchestrator | Wednesday 14 May 2025 02:46:02 +0000 (0:00:01.150) 0:00:05.918 ********* 2025-05-14 02:54:31.692560 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-14 02:54:31.692565 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-14 02:54:31.692570 | orchestrator | 2025-05-14 02:54:31.692575 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-14 02:54:31.692580 | orchestrator | Wednesday 14 May 2025 02:46:08 +0000 (0:00:05.178) 0:00:11.097 ********* 2025-05-14 02:54:31.692585 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:54:31.692589 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-14 02:54:31.692594 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.692599 | orchestrator | 2025-05-14 02:54:31.692604 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-14 02:54:31.692608 | orchestrator | Wednesday 14 May 2025 02:46:13 +0000 
(0:00:05.202) 0:00:16.299 ********* 2025-05-14 02:54:31.692613 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.692618 | orchestrator | 2025-05-14 02:54:31.692623 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-14 02:54:31.692627 | orchestrator | Wednesday 14 May 2025 02:46:14 +0000 (0:00:00.870) 0:00:17.169 ********* 2025-05-14 02:54:31.692632 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.692637 | orchestrator | 2025-05-14 02:54:31.692642 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-14 02:54:31.692647 | orchestrator | Wednesday 14 May 2025 02:46:16 +0000 (0:00:02.199) 0:00:19.369 ********* 2025-05-14 02:54:31.692651 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.692656 | orchestrator | 2025-05-14 02:54:31.692661 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-14 02:54:31.692666 | orchestrator | Wednesday 14 May 2025 02:46:21 +0000 (0:00:05.198) 0:00:24.567 ********* 2025-05-14 02:54:31.692671 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.692690 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.692695 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.692700 | orchestrator | 2025-05-14 02:54:31.692705 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-14 02:54:31.692709 | orchestrator | Wednesday 14 May 2025 02:46:21 +0000 (0:00:00.331) 0:00:24.898 ********* 2025-05-14 02:54:31.692714 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:54:31.692719 | orchestrator | 2025-05-14 02:54:31.692724 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-14 02:54:31.692729 | orchestrator | Wednesday 14 May 2025 02:46:51 +0000 (0:00:29.306) 0:00:54.205 ********* 2025-05-14 02:54:31.692733 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.692738 | orchestrator | 2025-05-14 02:54:31.692743 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-14 02:54:31.692748 | orchestrator | Wednesday 14 May 2025 02:47:05 +0000 (0:00:14.274) 0:01:08.479 ********* 2025-05-14 02:54:31.692753 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:54:31.692757 | orchestrator | 2025-05-14 02:54:31.692762 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-14 02:54:31.692767 | orchestrator | Wednesday 14 May 2025 02:47:19 +0000 (0:00:14.065) 0:01:22.544 ********* 2025-05-14 02:54:31.692782 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:54:31.692787 | orchestrator | 2025-05-14 02:54:31.692792 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-05-14 02:54:31.692797 | orchestrator | Wednesday 14 May 2025 02:47:20 +0000 (0:00:01.121) 0:01:23.666 ********* 2025-05-14 02:54:31.692802 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.692807 | orchestrator | 2025-05-14 02:54:31.692811 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-14 02:54:31.692816 | orchestrator | Wednesday 14 May 2025 02:47:21 +0000 (0:00:00.731) 0:01:24.397 ********* 2025-05-14 02:54:31.692821 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:54:31.692826 | 
orchestrator | 2025-05-14 02:54:31.692831 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-14 02:54:31.692835 | orchestrator | Wednesday 14 May 2025 02:47:22 +0000 (0:00:01.311) 0:01:25.709 ********* 2025-05-14 02:54:31.692840 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:54:31.692991 | orchestrator | 2025-05-14 02:54:31.693038 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-14 02:54:31.693045 | orchestrator | Wednesday 14 May 2025 02:47:39 +0000 (0:00:17.228) 0:01:42.937 ********* 2025-05-14 02:54:31.693051 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.693056 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693062 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693067 | orchestrator | 2025-05-14 02:54:31.693073 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-05-14 02:54:31.693079 | orchestrator | 2025-05-14 02:54:31.693084 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-14 02:54:31.693090 | orchestrator | Wednesday 14 May 2025 02:47:40 +0000 (0:00:00.336) 0:01:43.273 ********* 2025-05-14 02:54:31.693096 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:54:31.693101 | orchestrator | 2025-05-14 02:54:31.693107 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-05-14 02:54:31.693112 | orchestrator | Wednesday 14 May 2025 02:47:41 +0000 (0:00:00.839) 0:01:44.112 ********* 2025-05-14 02:54:31.693118 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693124 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693129 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.693135 | orchestrator | 2025-05-14 02:54:31.693140 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-05-14 02:54:31.693145 | orchestrator | Wednesday 14 May 2025 02:47:43 +0000 (0:00:02.339) 0:01:46.452 ********* 2025-05-14 02:54:31.693156 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693162 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693167 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.693172 | orchestrator | 2025-05-14 02:54:31.693178 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-14 02:54:31.693183 | orchestrator | Wednesday 14 May 2025 02:47:45 +0000 (0:00:02.354) 0:01:48.806 ********* 2025-05-14 02:54:31.693189 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.693195 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693200 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693206 | orchestrator | 2025-05-14 02:54:31.693212 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-14 02:54:31.693217 | orchestrator | Wednesday 14 May 2025 02:47:46 +0000 (0:00:00.669) 0:01:49.476 ********* 2025-05-14 02:54:31.693223 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-14 02:54:31.693228 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693234 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-14 02:54:31.693240 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693246 | orchestrator | ok: [testbed-node-0] 
=> (item=None) 2025-05-14 02:54:31.693252 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-05-14 02:54:31.693257 | orchestrator | 2025-05-14 02:54:31.693263 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-14 02:54:31.693283 | orchestrator | Wednesday 14 May 2025 02:47:56 +0000 (0:00:09.523) 0:01:59.000 ********* 2025-05-14 02:54:31.693288 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.693294 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693298 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693303 | orchestrator | 2025-05-14 02:54:31.693308 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-14 02:54:31.693314 | orchestrator | Wednesday 14 May 2025 02:47:57 +0000 (0:00:01.048) 0:02:00.048 ********* 2025-05-14 02:54:31.693319 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-14 02:54:31.693323 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.693329 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-14 02:54:31.693334 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693338 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-14 02:54:31.693348 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693353 | orchestrator | 2025-05-14 02:54:31.693358 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-14 02:54:31.693363 | orchestrator | Wednesday 14 May 2025 02:47:59 +0000 (0:00:01.951) 0:02:02.000 ********* 2025-05-14 02:54:31.693368 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693373 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.693378 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693382 | orchestrator | 2025-05-14 02:54:31.693388 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-05-14 02:54:31.693393 | orchestrator | Wednesday 14 May 2025 02:47:59 +0000 (0:00:00.515) 0:02:02.515 ********* 2025-05-14 02:54:31.693398 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693403 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693408 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.693413 | orchestrator | 2025-05-14 02:54:31.693418 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-14 02:54:31.693423 | orchestrator | Wednesday 14 May 2025 02:48:00 +0000 (0:00:01.214) 0:02:03.730 ********* 2025-05-14 02:54:31.693428 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693439 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693444 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.693449 | orchestrator | 2025-05-14 02:54:31.693454 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-14 02:54:31.693459 | orchestrator | Wednesday 14 May 2025 02:48:02 +0000 (0:00:02.223) 0:02:05.953 ********* 2025-05-14 02:54:31.693468 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693473 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693478 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:54:31.693483 | orchestrator | 2025-05-14 02:54:31.693488 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-14 
02:54:31.693493 | orchestrator | Wednesday 14 May 2025 02:48:23 +0000 (0:00:20.775) 0:02:26.729 ********* 2025-05-14 02:54:31.693498 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693503 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693508 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:54:31.693513 | orchestrator | 2025-05-14 02:54:31.693518 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-14 02:54:31.693523 | orchestrator | Wednesday 14 May 2025 02:48:37 +0000 (0:00:13.328) 0:02:40.057 ********* 2025-05-14 02:54:31.693527 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:54:31.693532 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693537 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693542 | orchestrator | 2025-05-14 02:54:31.693547 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-14 02:54:31.693552 | orchestrator | Wednesday 14 May 2025 02:48:38 +0000 (0:00:01.310) 0:02:41.368 ********* 2025-05-14 02:54:31.693557 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693562 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693566 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.693571 | orchestrator | 2025-05-14 02:54:31.693576 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-14 02:54:31.693581 | orchestrator | Wednesday 14 May 2025 02:48:48 +0000 (0:00:10.342) 0:02:51.710 ********* 2025-05-14 02:54:31.693586 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.693591 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693596 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693601 | orchestrator | 2025-05-14 02:54:31.693606 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-14 02:54:31.693611 | orchestrator | Wednesday 14 May 2025 02:48:50 +0000 (0:00:01.413) 0:02:53.123 ********* 2025-05-14 02:54:31.693615 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.693620 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.693625 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.693630 | orchestrator | 2025-05-14 02:54:31.693635 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-14 02:54:31.693640 | orchestrator | 2025-05-14 02:54:31.693644 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-14 02:54:31.693649 | orchestrator | Wednesday 14 May 2025 02:48:50 +0000 (0:00:00.512) 0:02:53.636 ********* 2025-05-14 02:54:31.693654 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:54:31.693661 | orchestrator | 2025-05-14 02:54:31.693665 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-14 02:54:31.693670 | orchestrator | Wednesday 14 May 2025 02:48:51 +0000 (0:00:00.664) 0:02:54.301 ********* 2025-05-14 02:54:31.693675 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-14 02:54:31.693680 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-14 02:54:31.693685 | orchestrator | 2025-05-14 02:54:31.693690 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] 
************************* 2025-05-14 02:54:31.693694 | orchestrator | Wednesday 14 May 2025 02:48:54 +0000 (0:00:03.589) 0:02:57.890 ********* 2025-05-14 02:54:31.693699 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-14 02:54:31.693706 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-14 02:54:31.693710 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-14 02:54:31.693720 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-14 02:54:31.693725 | orchestrator | 2025-05-14 02:54:31.693730 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-14 02:54:31.693767 | orchestrator | Wednesday 14 May 2025 02:49:01 +0000 (0:00:06.577) 0:03:04.468 ********* 2025-05-14 02:54:31.693773 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-14 02:54:31.693778 | orchestrator | 2025-05-14 02:54:31.693783 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-14 02:54:31.693792 | orchestrator | Wednesday 14 May 2025 02:49:04 +0000 (0:00:03.275) 0:03:07.743 ********* 2025-05-14 02:54:31.693797 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-14 02:54:31.693802 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-14 02:54:31.693807 | orchestrator | 2025-05-14 02:54:31.693811 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-14 02:54:31.693816 | orchestrator | Wednesday 14 May 2025 02:49:08 +0000 (0:00:04.079) 0:03:11.822 ********* 2025-05-14 02:54:31.693821 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-14 02:54:31.693826 | orchestrator | 2025-05-14 02:54:31.693831 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-14 02:54:31.693836 | orchestrator | Wednesday 14 May 2025 02:49:12 +0000 (0:00:03.759) 0:03:15.581 ********* 2025-05-14 02:54:31.693840 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-14 02:54:31.693846 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-14 02:54:31.693850 | orchestrator | 2025-05-14 02:54:31.693855 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-14 02:54:31.693864 | orchestrator | Wednesday 14 May 2025 02:49:20 +0000 (0:00:08.163) 0:03:23.745 ********* 2025-05-14 02:54:31.693875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.693884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.693901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.693912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.693919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.693926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.693931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.694186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.694213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.694219 | orchestrator | 2025-05-14 02:54:31.694224 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-14 02:54:31.694229 | orchestrator | Wednesday 
14 May 2025 02:49:22 +0000 (0:00:01.434) 0:03:25.179 ********* 2025-05-14 02:54:31.694234 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.694239 | orchestrator | 2025-05-14 02:54:31.694243 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-14 02:54:31.694248 | orchestrator | Wednesday 14 May 2025 02:49:22 +0000 (0:00:00.262) 0:03:25.441 ********* 2025-05-14 02:54:31.694253 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.694258 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.694263 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.694268 | orchestrator | 2025-05-14 02:54:31.694272 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-14 02:54:31.694277 | orchestrator | Wednesday 14 May 2025 02:49:22 +0000 (0:00:00.263) 0:03:25.705 ********* 2025-05-14 02:54:31.694282 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-14 02:54:31.694287 | orchestrator | 2025-05-14 02:54:31.694298 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-14 02:54:31.694304 | orchestrator | Wednesday 14 May 2025 02:49:23 +0000 (0:00:00.640) 0:03:26.346 ********* 2025-05-14 02:54:31.694308 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.694313 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.694318 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.694323 | orchestrator | 2025-05-14 02:54:31.694328 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-14 02:54:31.694333 | orchestrator | Wednesday 14 May 2025 02:49:23 +0000 (0:00:00.273) 0:03:26.620 ********* 2025-05-14 02:54:31.694338 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:54:31.694343 | orchestrator | 2025-05-14 02:54:31.694348 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-14 02:54:31.694353 | orchestrator | Wednesday 14 May 2025 02:49:24 +0000 (0:00:00.711) 0:03:27.331 ********* 2025-05-14 02:54:31.694358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.694370 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.694418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.694425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.694431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.694441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.694446 | orchestrator | 2025-05-14 02:54:31.694451 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-14 02:54:31.694456 | orchestrator | Wednesday 14 May 2025 02:49:26 +0000 (0:00:02.531) 0:03:29.862 ********* 2025-05-14 02:54:31.694464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:54:31.694469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.694478 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.694484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:54:31.694493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.694498 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.694504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:54:31.694512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.694518 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.694523 | 
orchestrator | 2025-05-14 02:54:31.694528 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-14 02:54:31.694533 | orchestrator | Wednesday 14 May 2025 02:49:27 +0000 (0:00:00.560) 0:03:30.423 ********* 2025-05-14 02:54:31.694788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:54:31.694926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.694944 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.694957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:54:31.694985 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.694995 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.695045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:54:31.695064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695073 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.695081 | orchestrator | 2025-05-14 02:54:31.695090 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-14 02:54:31.695100 | orchestrator | Wednesday 14 May 2025 02:49:28 +0000 (0:00:00.955) 0:03:31.378 ********* 2025-05-14 02:54:31.695109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.695123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.695139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.695154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.695163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.695185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.695212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
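Note: the cell bootstrap tasks logged above ("Create cell0 mappings", "Get a list of existing cells", "Create cell") wrap nova-manage cell_v2 commands that are executed inside one-shot bootstrap containers. A rough sketch of the equivalent operations follows, assuming the database and transport URLs are taken from the rendered nova.conf (the exact invocation used by the kolla bootstrap containers may differ):

    # Sketch only; kolla-ansible runs these inside the nova bootstrap containers, delegated to one host.
    - name: Map cell0 database
      ansible.builtin.command: nova-manage cell_v2 map_cell0
      run_once: true

    - name: List existing cells
      ansible.builtin.command: nova-manage cell_v2 list_cells --verbose
      register: existing_cells
      changed_when: false
      run_once: true

    - name: Create cell1 if it does not exist yet
      ansible.builtin.command: nova-manage cell_v2 create_cell --name cell1
      when: "'cell1' not in existing_cells.stdout"
      run_once: true

This also explains why only testbed-node-0 reports "changed" for these tasks while testbed-node-1 and testbed-node-2 skip: the cell database operations only need to run once per cell.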
2025-05-14 02:54:31.695220 | orchestrator | 2025-05-14 02:54:31.695228 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-14 02:54:31.695236 | orchestrator | Wednesday 14 May 2025 02:49:30 +0000 (0:00:02.579) 0:03:33.958 ********* 2025-05-14 02:54:31.695244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.695257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.695273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.695288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.695296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.695313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.695337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695351 | orchestrator | 2025-05-14 02:54:31.695359 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-14 02:54:31.695367 | orchestrator | Wednesday 14 May 2025 02:49:37 +0000 (0:00:06.332) 0:03:40.291 ********* 2025-05-14 02:54:31.695376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:54:31.695384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695400 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.695413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:54:31.695434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695451 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.695459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-14 02:54:31.695469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695500 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.695509 | orchestrator | 2025-05-14 02:54:31.695516 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-14 02:54:31.695525 | orchestrator | Wednesday 14 May 2025 02:49:38 +0000 (0:00:00.892) 0:03:41.183 ********* 2025-05-14 02:54:31.695533 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.695541 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:54:31.695549 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:54:31.695556 | orchestrator | 2025-05-14 02:54:31.695564 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-14 02:54:31.695572 | orchestrator | Wednesday 14 May 2025 02:49:39 +0000 (0:00:01.681) 0:03:42.865 ********* 2025-05-14 02:54:31.695584 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.695592 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.695600 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.695608 | orchestrator | 2025-05-14 02:54:31.695617 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-14 02:54:31.695625 | orchestrator | Wednesday 14 May 2025 02:49:40 +0000 (0:00:00.450) 0:03:43.315 ********* 2025-05-14 02:54:31.695634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.695644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-14 02:54:31.695664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2025-05-14 02:54:31.695678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.695684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.695695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.695713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.695718 | orchestrator | 2025-05-14 02:54:31.695723 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-14 02:54:31.695729 | orchestrator | Wednesday 14 May 2025 02:49:42 +0000 (0:00:01.903) 0:03:45.219 ********* 2025-05-14 02:54:31.695734 | orchestrator | 2025-05-14 02:54:31.695739 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-14 02:54:31.695744 | orchestrator | Wednesday 14 May 2025 02:49:42 +0000 (0:00:00.304) 0:03:45.524 ********* 2025-05-14 02:54:31.695749 | orchestrator | 2025-05-14 02:54:31.695754 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-14 02:54:31.695759 | orchestrator | Wednesday 14 May 2025 02:49:42 +0000 (0:00:00.113) 0:03:45.637 ********* 2025-05-14 02:54:31.695764 | orchestrator | 2025-05-14 02:54:31.695772 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-14 02:54:31.695777 | orchestrator | Wednesday 14 May 2025 02:49:42 +0000 (0:00:00.230) 0:03:45.867 ********* 2025-05-14 02:54:31.695782 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.695787 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:54:31.695792 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:54:31.695797 | orchestrator | 2025-05-14 02:54:31.695802 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-14 02:54:31.695807 | orchestrator | Wednesday 14 May 2025 02:50:05 +0000 (0:00:22.607) 0:04:08.474 ********* 2025-05-14 02:54:31.695812 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.695817 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:54:31.695822 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:54:31.695827 | orchestrator | 2025-05-14 02:54:31.695832 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-05-14 02:54:31.695837 | orchestrator | 2025-05-14 02:54:31.695842 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 02:54:31.695847 | orchestrator | Wednesday 14 May 2025 02:50:11 +0000 (0:00:05.781) 0:04:14.256 ********* 2025-05-14 02:54:31.695853 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:54:31.695860 | orchestrator | 2025-05-14 02:54:31.695865 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 02:54:31.695870 | orchestrator | Wednesday 14 May 2025 02:50:12 +0000 (0:00:01.390) 0:04:15.647 ********* 2025-05-14 02:54:31.695875 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.695880 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.695885 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.695890 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.695895 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.695900 | orchestrator | skipping: 
[testbed-node-2] 2025-05-14 02:54:31.695905 | orchestrator | 2025-05-14 02:54:31.695910 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-14 02:54:31.695915 | orchestrator | Wednesday 14 May 2025 02:50:13 +0000 (0:00:00.733) 0:04:16.381 ********* 2025-05-14 02:54:31.695927 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.695932 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.695937 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.695942 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-14 02:54:31.695947 | orchestrator | 2025-05-14 02:54:31.695952 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-14 02:54:31.695957 | orchestrator | Wednesday 14 May 2025 02:50:14 +0000 (0:00:01.168) 0:04:17.549 ********* 2025-05-14 02:54:31.695963 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-14 02:54:31.695968 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-05-14 02:54:31.695973 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-14 02:54:31.695978 | orchestrator | 2025-05-14 02:54:31.695983 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-14 02:54:31.695988 | orchestrator | Wednesday 14 May 2025 02:50:15 +0000 (0:00:00.913) 0:04:18.463 ********* 2025-05-14 02:54:31.695994 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-14 02:54:31.696015 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-14 02:54:31.696024 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-14 02:54:31.696029 | orchestrator | 2025-05-14 02:54:31.696035 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-14 02:54:31.696040 | orchestrator | Wednesday 14 May 2025 02:50:16 +0000 (0:00:01.335) 0:04:19.798 ********* 2025-05-14 02:54:31.696045 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-14 02:54:31.696050 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.696055 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-14 02:54:31.696060 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.696065 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-14 02:54:31.696070 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.696075 | orchestrator | 2025-05-14 02:54:31.696080 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-14 02:54:31.696085 | orchestrator | Wednesday 14 May 2025 02:50:17 +0000 (0:00:00.640) 0:04:20.439 ********* 2025-05-14 02:54:31.696090 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-14 02:54:31.696095 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-14 02:54:31.696103 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.696111 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-14 02:54:31.696124 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-14 02:54:31.696132 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-14 02:54:31.696142 | orchestrator | changed: [testbed-node-4] => 
(item=net.bridge.bridge-nf-call-iptables) 2025-05-14 02:54:31.696150 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-14 02:54:31.696158 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.696167 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-14 02:54:31.696172 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-14 02:54:31.696177 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.696182 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-14 02:54:31.696187 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-14 02:54:31.696192 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-14 02:54:31.696197 | orchestrator | 2025-05-14 02:54:31.696207 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-14 02:54:31.696212 | orchestrator | Wednesday 14 May 2025 02:50:18 +0000 (0:00:01.198) 0:04:21.638 ********* 2025-05-14 02:54:31.696223 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.696228 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.696233 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.696238 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:54:31.696243 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:54:31.696248 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:54:31.696253 | orchestrator | 2025-05-14 02:54:31.696259 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-14 02:54:31.696264 | orchestrator | Wednesday 14 May 2025 02:50:19 +0000 (0:00:01.135) 0:04:22.774 ********* 2025-05-14 02:54:31.696269 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.696274 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.696279 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.696284 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:54:31.696289 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:54:31.696294 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:54:31.696299 | orchestrator | 2025-05-14 02:54:31.696304 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-14 02:54:31.696309 | orchestrator | Wednesday 14 May 2025 02:50:21 +0000 (0:00:01.905) 0:04:24.679 ********* 2025-05-14 02:54:31.696314 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:54:31.696322 | orchestrator | 
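(Note on the nova-cell preparation steps logged above: loading br_netfilter and enabling the bridge-nf-call sysctls on the compute nodes is ordinary kernel-module and sysctl management. The sketch below is a minimal stand-alone equivalent; the module and variable names are taken from the log, while the play/task structure and the "compute" host group are assumptions rather than the actual kolla-ansible/OSISM role code.)

```yaml
# Sketch only: equivalent of the br_netfilter load/persist and bridge-nf-call
# steps logged above; not the literal kolla-ansible/OSISM task files.
- hosts: compute            # assumed group name; the log targets testbed-node-3/4/5
  become: true
  tasks:
    - name: Load br_netfilter module
      community.general.modprobe:
        name: br_netfilter
        state: present

    - name: Persist br_netfilter via modules-load.d
      ansible.builtin.copy:
        dest: /etc/modules-load.d/br_netfilter.conf
        content: "br_netfilter\n"
        mode: "0644"

    - name: Enable bridge-nf-call sysctl variables
      ansible.posix.sysctl:
        name: "{{ item }}"
        value: "1"
        sysctl_set: true
        state: present
      loop:
        - net.bridge.bridge-nf-call-iptables
        - net.bridge.bridge-nf-call-ip6tables
```

(Persisting the module through /etc/modules-load.d is what makes it survive a reboot, which is why the role both loads the module and drops a persistence file; the control nodes are skipped because these settings only matter where Neutron/Nova networking runs on the host.)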
changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:54:31.696331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.696345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.696352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:54:31.696357 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:54:31.696363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.696369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.696377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.696387 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:54:31.696402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.696408 | orchestrator | 
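(Note on the container definitions in these results: each 'healthcheck' dict corresponds to Docker's native healthcheck options, with interval/timeout/start_period given in seconds and the test command executed inside the container. kolla-ansible drives the containers through its own modules; the sketch below merely restates the logged nova_ssh healthcheck using the generic community.docker.docker_container module as a familiar reference point, and the container name used here is illustrative only.)

```yaml
# Illustration only: the nova_ssh healthcheck from the log expressed with the
# generic community.docker.docker_container module (not the module kolla uses).
- name: Run an nova-ssh-like container with the logged healthcheck (sketch)
  community.docker.docker_container:
    name: nova_ssh_example            # hypothetical name, not from the deployment
    image: registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_listen sshd 8022"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s
```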
skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.696413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.696419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.696424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.696430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.696442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.696452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.696458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.696464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.696469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:54:31.696474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.696483 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.696494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.696504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.696510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.696515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.696522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.696532 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.696545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.696551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.696556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.696562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.696567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.696579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.696585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.697071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.697100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.697159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.697171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697177 | orchestrator | 
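(Note on the "Ensuring config directories exist" results above: kolla-ansible keeps one directory per enabled service under /etc/kolla on the target host and bind-mounts it read-only into the container at /var/lib/kolla/config_files/, as the volume lists in the log show. The sketch below illustrates the idea with a plain file task; the directory names come from the log, while the ownership and mode are assumptions, not values read from the role.)

```yaml
# Sketch only: create per-service config directories like those referenced in
# the volume mounts above (e.g. /etc/kolla/nova-compute/ -> /var/lib/kolla/config_files/).
- name: Ensure per-service config directories exist (illustrative)
  ansible.builtin.file:
    path: "/etc/kolla/{{ item }}"
    state: directory
    owner: root          # assumed; the real role takes owner/mode from its defaults
    group: root
    mode: "0770"
  loop:
    - nova-libvirt
    - nova-ssh
    - nova-compute
    - nova-conductor
```

(The trailing ':ro' on those mounts means the containers only read the generated configuration; changes are made on the host side and picked up when the restart handlers later in the play bounce the affected containers.)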
skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697212 | orchestrator | 2025-05-14 02:54:31.697217 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 02:54:31.697223 | orchestrator | Wednesday 14 May 2025 02:50:24 +0000 (0:00:02.674) 0:04:27.353 ********* 2025-05-14 02:54:31.697229 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-14 02:54:31.697235 | orchestrator | 2025-05-14 02:54:31.697240 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-14 02:54:31.697245 | orchestrator | Wednesday 14 May 2025 02:50:25 +0000 (0:00:01.498) 0:04:28.852 ********* 2025-05-14 02:54:31.697255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}}) 2025-05-14 02:54:31.697277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697313 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697354 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697359 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.697368 | orchestrator | 2025-05-14 02:54:31.697373 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-14 02:54:31.697378 | orchestrator | Wednesday 14 May 2025 02:50:29 +0000 (0:00:03.894) 0:04:32.746 ********* 2025-05-14 02:54:31.697384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.697393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.697401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697407 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.697413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.697426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.697432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697437 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.697446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.697454 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697459 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.697465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.697470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.697481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697486 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.697491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.697500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697505 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.697514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.697519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697525 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.697530 | orchestrator | 2025-05-14 02:54:31.697535 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-14 02:54:31.697544 | orchestrator | Wednesday 14 May 2025 02:50:31 +0000 (0:00:01.683) 0:04:34.430 ********* 2025-05-14 02:54:31.697550 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.697555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.697560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697569 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.697577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.697583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.697593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697599 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.697605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.697612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.697621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697627 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.697637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.697643 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697653 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.697659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.697665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697671 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.697677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.697686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.697692 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.697698 | orchestrator | 2025-05-14 02:54:31.697704 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 02:54:31.697709 | orchestrator | Wednesday 14 May 2025 
02:50:33 +0000 (0:00:02.529) 0:04:36.960 *********
2025-05-14 02:54:31.697714 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:54:31.697719 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:54:31.697724 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:54:31.697730 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-14 02:54:31.697735 | orchestrator |
2025-05-14 02:54:31.697740 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-05-14 02:54:31.697745 | orchestrator | Wednesday 14 May 2025 02:50:35 +0000 (0:00:01.204) 0:04:38.164 *********
2025-05-14 02:54:31.697758 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-14 02:54:31.697763 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-14 02:54:31.697768 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-14 02:54:31.697773 | orchestrator |
2025-05-14 02:54:31.697778 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-05-14 02:54:31.697783 | orchestrator | Wednesday 14 May 2025 02:50:36 +0000 (0:00:00.828) 0:04:38.993 *********
2025-05-14 02:54:31.697788 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-14 02:54:31.697793 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-14 02:54:31.697798 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-14 02:54:31.697803 | orchestrator |
2025-05-14 02:54:31.697808 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-05-14 02:54:31.697813 | orchestrator | Wednesday 14 May 2025 02:50:36 +0000 (0:00:00.737) 0:04:39.731 *********
2025-05-14 02:54:31.697818 | orchestrator | ok: [testbed-node-3]
2025-05-14 02:54:31.697823 | orchestrator | ok: [testbed-node-4]
2025-05-14 02:54:31.697828 | orchestrator | ok: [testbed-node-5]
2025-05-14 02:54:31.697833 | orchestrator |
2025-05-14 02:54:31.697838 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-05-14 02:54:31.697843 | orchestrator | Wednesday 14 May 2025 02:50:37 +0000 (0:00:00.641) 0:04:40.372 *********
2025-05-14 02:54:31.697848 | orchestrator | ok: [testbed-node-3]
2025-05-14 02:54:31.697853 | orchestrator | ok: [testbed-node-4]
2025-05-14 02:54:31.697858 | orchestrator | ok: [testbed-node-5]
2025-05-14 02:54:31.697863 | orchestrator |
2025-05-14 02:54:31.697868 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-05-14 02:54:31.697873 | orchestrator | Wednesday 14 May 2025 02:50:37 +0000 (0:00:00.500) 0:04:40.872 *********
2025-05-14 02:54:31.697878 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-14 02:54:31.697883 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-14 02:54:31.697888 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-14 02:54:31.697893 | orchestrator |
2025-05-14 02:54:31.697898 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-05-14 02:54:31.697903 | orchestrator | Wednesday 14 May 2025 02:50:39 +0000 (0:00:01.376) 0:04:42.249 *********
2025-05-14 02:54:31.697908 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-14 02:54:31.697913 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-14 02:54:31.697918 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
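
The keyring tasks above read the ceph.client.nova and ceph.client.cinder keyrings on the deployment host, pull the base64 key out of each, and stage the keyring files into the nova-compute configuration on testbed-node-3 through testbed-node-5. As a rough sketch of the extraction step only (not the nova-cell role's actual implementation; the keyring path and client name below are placeholders), reading such a key from a Ceph keyring could look like this:

    # Illustrative sketch: pull the "key = ..." value for one client out of a
    # Ceph keyring file. The path and client name are assumptions, not taken
    # from this deployment.
    import re

    def extract_ceph_key(keyring_path, client="client.nova"):
        """Return the base64 key from the [client] section of a keyring file."""
        section = None
        with open(keyring_path) as handle:
            for raw in handle:
                line = raw.strip()
                if line.startswith("[") and line.endswith("]"):
                    section = line[1:-1]
                elif section == client:
                    match = re.match(r"key\s*=\s*(\S+)", line)
                    if match:
                        return match.group(1)
        raise ValueError(f"no key for {client} in {keyring_path}")

    if __name__ == "__main__":
        print(extract_ceph_key("/etc/ceph/ceph.client.nova.keyring"))

The extracted value is what the later libvirt secret tasks attach to the secret UUIDs further down in this play.
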
2025-05-14 02:54:31.697923 | orchestrator |
2025-05-14 02:54:31.697928 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-05-14 02:54:31.697933 | orchestrator | Wednesday 14 May 2025 02:50:40 +0000 (0:00:01.362) 0:04:43.611 *********
2025-05-14 02:54:31.697938 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-14 02:54:31.697945 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-14 02:54:31.697954 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-14 02:54:31.697961 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-05-14 02:54:31.697975 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-05-14 02:54:31.697983 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-05-14 02:54:31.697991 | orchestrator |
2025-05-14 02:54:31.698094 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-05-14 02:54:31.698108 | orchestrator | Wednesday 14 May 2025 02:50:46 +0000 (0:00:05.481) 0:04:49.093 *********
2025-05-14 02:54:31.698116 | orchestrator | skipping: [testbed-node-3]
2025-05-14 02:54:31.698124 | orchestrator | skipping: [testbed-node-4]
2025-05-14 02:54:31.698132 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:54:31.698140 | orchestrator |
2025-05-14 02:54:31.698149 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-05-14 02:54:31.698157 | orchestrator | Wednesday 14 May 2025 02:50:46 +0000 (0:00:00.467) 0:04:49.560 *********
2025-05-14 02:54:31.698174 | orchestrator | skipping: [testbed-node-3]
2025-05-14 02:54:31.698182 | orchestrator | skipping: [testbed-node-4]
2025-05-14 02:54:31.698190 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:54:31.698198 | orchestrator |
2025-05-14 02:54:31.698206 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-05-14 02:54:31.698211 | orchestrator | Wednesday 14 May 2025 02:50:47 +0000 (0:00:00.476) 0:04:50.037 *********
2025-05-14 02:54:31.698216 | orchestrator | changed: [testbed-node-3]
2025-05-14 02:54:31.698221 | orchestrator | changed: [testbed-node-4]
2025-05-14 02:54:31.698226 | orchestrator | changed: [testbed-node-5]
2025-05-14 02:54:31.698231 | orchestrator |
2025-05-14 02:54:31.698236 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-05-14 02:54:31.698241 | orchestrator | Wednesday 14 May 2025 02:50:48 +0000 (0:00:01.373) 0:04:51.410 *********
2025-05-14 02:54:31.698247 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-14 02:54:31.698258 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-14 02:54:31.698263 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-14 02:54:31.698268 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-14 02:54:31.698274 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-14 02:54:31.698279 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
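
The "Pushing nova secret xml for libvirt" results above and the "Pushing secrets key for libvirt" task below push the libvirt secret definitions and base64 key material for the client.nova and client.cinder keys (UUIDs 5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd and 63dd366f-e403-41f2-beff-dad9980a1637 in this run) for the nova_libvirt containers. Purely as an illustration of the libvirt mechanism being prepared, and not what the playbook itself runs (the virsh calls, temp file handling and key value below are assumptions), defining such a secret by hand would look roughly like this:

    # Illustrative sketch of the libvirt Ceph secret these tasks prepare.
    # The UUID matches the client.nova secret from this run; virsh usage and
    # the key value are assumptions for demonstration only.
    import subprocess
    import tempfile

    NOVA_SECRET_UUID = "5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd"
    SECRET_XML = """<secret ephemeral='no' private='no'>
      <uuid>{uuid}</uuid>
      <usage type='ceph'>
        <name>client.nova secret</name>
      </usage>
    </secret>
    """.format(uuid=NOVA_SECRET_UUID)

    def define_ceph_secret(base64_key):
        """Register the secret with libvirt and attach the Ceph key to it."""
        with tempfile.NamedTemporaryFile("w", suffix=".xml") as xml_file:
            xml_file.write(SECRET_XML)
            xml_file.flush()
            subprocess.run(["virsh", "secret-define", "--file", xml_file.name],
                           check=True)
        subprocess.run(["virsh", "secret-set-value", "--secret", NOVA_SECRET_UUID,
                        "--base64", base64_key], check=True)

Once the secret holds the key, the libvirt/RBD configuration can reference the UUID instead of embedding the key in plain text.
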
2025-05-14 02:54:31.698284 | orchestrator |
2025-05-14 02:54:31.698289 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-05-14 02:54:31.698301 | orchestrator | Wednesday 14 May 2025 02:50:51 +0000 (0:00:03.516) 0:04:54.927 *********
2025-05-14 02:54:31.698307 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-14 02:54:31.698312 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-14 02:54:31.698317 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-14 02:54:31.698322 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-14 02:54:31.698327 | orchestrator | changed: [testbed-node-3]
2025-05-14 02:54:31.698332 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-14 02:54:31.698337 | orchestrator | changed: [testbed-node-4]
2025-05-14 02:54:31.698342 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-14 02:54:31.698347 | orchestrator | changed: [testbed-node-5]
2025-05-14 02:54:31.698352 | orchestrator |
2025-05-14 02:54:31.698357 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-05-14 02:54:31.698362 | orchestrator | Wednesday 14 May 2025 02:50:55 +0000 (0:00:03.362) 0:04:58.289 *********
2025-05-14 02:54:31.698367 | orchestrator | skipping: [testbed-node-3]
2025-05-14 02:54:31.698372 | orchestrator |
2025-05-14 02:54:31.698377 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-05-14 02:54:31.698382 | orchestrator | Wednesday 14 May 2025 02:50:55 +0000 (0:00:00.120) 0:04:58.410 *********
2025-05-14 02:54:31.698387 | orchestrator | skipping: [testbed-node-3]
2025-05-14 02:54:31.698392 | orchestrator | skipping: [testbed-node-4]
2025-05-14 02:54:31.698397 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:54:31.698402 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:54:31.698407 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:54:31.698412 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:54:31.698417 | orchestrator |
2025-05-14 02:54:31.698422 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-05-14 02:54:31.698427 | orchestrator | Wednesday 14 May 2025 02:50:56 +0000 (0:00:00.901) 0:04:59.311 *********
2025-05-14 02:54:31.698438 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-14 02:54:31.698443 | orchestrator |
2025-05-14 02:54:31.698448 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-05-14 02:54:31.698453 | orchestrator | Wednesday 14 May 2025 02:50:56 +0000 (0:00:00.396) 0:04:59.708 *********
2025-05-14 02:54:31.698458 | orchestrator | skipping: [testbed-node-3]
2025-05-14 02:54:31.698463 | orchestrator | skipping: [testbed-node-4]
2025-05-14 02:54:31.698468 | orchestrator | skipping: [testbed-node-5]
2025-05-14 02:54:31.698473 | orchestrator | skipping: [testbed-node-0]
2025-05-14 02:54:31.698478 | orchestrator | skipping: [testbed-node-1]
2025-05-14 02:54:31.698482 | orchestrator | skipping: [testbed-node-2]
2025-05-14 02:54:31.698487 | orchestrator |
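
The next task distributes kolla's config.json bootstrap files, which tell each container's entrypoint what command to start and which files to copy out of /var/lib/kolla/config_files/ into place. As a hedged sketch only (the command, file list and permissions below are placeholders, not the files generated for this deployment), such a file for nova-compute might look roughly like this:

    # Hedged illustration of a kolla config.json; values are placeholders and
    # not copied from this deployment's generated configuration.
    import json

    nova_compute_config = {
        "command": "nova-compute",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/nova.conf",
                "dest": "/etc/nova/nova.conf",
                "owner": "nova",
                "perm": "0600",
            },
        ],
        "permissions": [
            {"path": "/var/log/kolla/nova", "owner": "nova:nova", "recurse": True},
        ],
    }

    # Serialize to the JSON document the container entrypoint would consume.
    print(json.dumps(nova_compute_config, indent=2))
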
2025-05-14 02:54:31.698492 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-05-14 02:54:31.698497 | orchestrator | Wednesday 14 May 2025 02:50:57 +0000 (0:00:00.714) 0:05:00.422 *********
2025-05-14 02:54:31.698503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-14 02:54:31.698512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-14 02:54:31.698524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-14 02:54:31.698530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-14 02:54:31.698535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.698547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.698553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698562 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.698627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.698633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.698665 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.698690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.698713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698718 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.698750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 
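Each loop item printed above is a kolla container definition: image, volume list, and an optional healthcheck dict. A minimal sketch of how such an item could be rendered as docker-run style arguments, assuming the empty strings in 'volumes' are unset optional mounts to be dropped and that the healthcheck keys map onto --health-* flags; this is an illustration only, not kolla-ansible's own deployment logic.

```python
# Illustrative only: render one of the service definitions shown in the loop
# above as docker-run style arguments. Dropping the '' volume placeholders and
# the --health-* flag mapping are assumptions, not kolla-ansible's code.
service = {
    "container_name": "nova_compute",
    "image": "registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206",
    "volumes": [
        "/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro",
        "kolla_logs:/var/log/kolla/",
        "",  # optional mount left empty, exactly as in the log output
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port nova-compute 5672"],
        "timeout": "30",
    },
}


def docker_args(svc: dict) -> list:
    args = ["--name", svc["container_name"]]
    for vol in svc["volumes"]:
        if vol:  # skip the '' placeholders
            args += ["-v", vol]
    hc = svc.get("healthcheck") or {}
    if hc:
        args += [
            "--health-cmd", hc["test"][-1],
            "--health-interval", f"{hc['interval']}s",
            "--health-retries", str(hc["retries"]),
            "--health-start-period", f"{hc['start_period']}s",
            "--health-timeout", f"{hc['timeout']}s",
        ]
    return args + [svc["image"]]


print(" ".join(docker_args(service)))
```

The same shape recurs for every service in the loop; only services enabled on hosts in the matching group report "changed", the rest "skipping".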
02:54:31.698827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.698868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698873 | orchestrator | 2025-05-14 02:54:31.698878 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-14 02:54:31.698883 | orchestrator | Wednesday 14 May 2025 02:51:01 +0000 (0:00:04.063) 0:05:04.485 ********* 2025-05-14 02:54:31.698888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.698893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.698898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': 
{'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.698929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.698939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.698944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.698966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.698976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.698981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.698986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.698992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.699016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.699034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.699048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.699057 | orchestrator | 
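The nova-libvirt items repeated above pin a memlock ulimit of 67108864 bytes for both the soft and hard limits. A small sketch just to make that number concrete and to show one plausible rendering as a container-runtime flag; the --ulimit form is an assumption for illustration, not taken from the role.

```python
# The memlock value from the nova-libvirt definition above: 67108864 bytes.
ulimits = {"memlock": {"soft": 67108864, "hard": 67108864}}

assert ulimits["memlock"]["soft"] == 64 * 1024 * 1024  # i.e. 64 MiB

# Hypothetical rendering as a docker-style flag (assumption, for illustration):
flags = [
    f"--ulimit {name}={limit['soft']}:{limit['hard']}"
    for name, limit in ulimits.items()
]
print(flags)  # ['--ulimit memlock=67108864:67108864']
```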
skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.699065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.699073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.699085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.699100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.699114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.699122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.699130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.699138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.699147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.699164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.699179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.699188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.699196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.699204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.699212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.699231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.699244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.699252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.699261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.699269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}}) 2025-05-14 02:54:31.699277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.699292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.699301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.699311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.699316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.699321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.699326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.699336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.699341 | orchestrator | 2025-05-14 02:54:31.699346 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-05-14 02:54:31.699357 | orchestrator | Wednesday 14 May 2025 02:51:08 +0000 (0:00:07.124) 0:05:11.609 ********* 2025-05-14 02:54:31.699362 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.699367 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.699372 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.699377 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.699381 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.699386 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.699391 | orchestrator | 2025-05-14 02:54:31.699395 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-14 02:54:31.699400 | orchestrator | Wednesday 14 May 2025 02:51:10 +0000 (0:00:01.694) 0:05:13.304 ********* 2025-05-14 02:54:31.699405 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-14 02:54:31.699410 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-14 02:54:31.699415 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-14 02:54:31.699420 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-14 02:54:31.699425 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.699433 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-14 02:54:31.699438 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-14 02:54:31.699442 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.699447 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-14 02:54:31.699452 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-14 02:54:31.699457 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.699462 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-14 02:54:31.699466 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-14 02:54:31.699471 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-14 02:54:31.699476 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-14 02:54:31.699481 | orchestrator | 2025-05-14 02:54:31.699486 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-14 02:54:31.699490 | orchestrator | Wednesday 14 May 2025 02:51:15 +0000 (0:00:05.142) 0:05:18.446 ********* 2025-05-14 02:54:31.699495 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.699500 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.699504 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.699514 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.699518 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.699523 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.699528 | orchestrator | 2025-05-14 02:54:31.699532 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-14 02:54:31.699537 | orchestrator | Wednesday 14 May 2025 02:51:16 +0000 (0:00:00.909) 0:05:19.355 ********* 2025-05-14 02:54:31.699542 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-14 02:54:31.699547 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-14 02:54:31.699552 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-14 02:54:31.699557 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-14 02:54:31.699562 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-14 02:54:31.699566 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-14 02:54:31.699571 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-14 02:54:31.699576 | 
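In the SASL configuration task above, the control-plane hosts (testbed-node-0..2) skip every item while the compute hosts (testbed-node-3..5) render auth.conf for nova-compute and nova-libvirt plus sasl.conf for nova-libvirt. A hedged sketch of that skip/changed pattern follows; the group layout is inferred from the log, and the decision function is my own simplification rather than the role's actual 'when' condition.

```python
# Sketch of the skip/changed pattern in the SASL task above. Group membership
# is inferred from the log (nodes 0-2 control, 3-5 compute); the condition is
# a simplification, not the actual Ansible 'when' expression.
host_groups = {
    f"testbed-node-{i}": {"control"} if i < 3 else {"compute", "nova-libvirt"}
    for i in range(6)
}

items = [
    {"src": "auth.conf.j2", "dest": "auth.conf", "service": "nova-compute"},
    {"src": "auth.conf.j2", "dest": "auth.conf", "service": "nova-libvirt"},
    {"src": "sasl.conf.j2", "dest": "sasl.conf", "service": "nova-libvirt"},
]


def wanted(host: str, item: dict) -> bool:
    # nova-compute runs on the 'compute' group in this testbed
    group = "compute" if item["service"] == "nova-compute" else item["service"]
    return group in host_groups[host]


for host in sorted(host_groups):
    for item in items:
        state = "changed" if wanted(host, item) else "skipping"
        print(f"{state}: [{host}] => {item['src']} -> {item['dest']}")
```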
orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-14 02:54:31.699580 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-14 02:54:31.699585 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-14 02:54:31.699590 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.699595 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-14 02:54:31.699600 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.699608 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-14 02:54:31.699616 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.699623 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-14 02:54:31.699631 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-14 02:54:31.699642 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-14 02:54:31.699649 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-14 02:54:31.699656 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-14 02:54:31.699663 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-14 02:54:31.699670 | orchestrator | 2025-05-14 02:54:31.699678 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-14 02:54:31.699685 | orchestrator | Wednesday 14 May 2025 02:51:24 +0000 (0:00:08.309) 0:05:27.665 ********* 2025-05-14 02:54:31.699692 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 02:54:31.699699 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 02:54:31.699713 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-14 02:54:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:31.699721 | orchestrator | 2025-05-14 02:54:31.699734 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-14 02:54:31.699742 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-14 02:54:31.699749 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-14 02:54:31.699756 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 02:54:31.699764 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 02:54:31.699772 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 02:54:31.699779 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-14 02:54:31.699788 |
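The nova_ssh container in this play is health-checked with 'healthcheck_listen sshd 8022', the SSH endpoint nova uses for copying data between compute hosts. As a rough stand-in for what such a probe verifies, the sketch below only checks that something accepts TCP connections on the port; the real kolla helper inspects listening sockets for the named process, so this is deliberately a weaker approximation.

```python
# Weaker stand-in for 'healthcheck_listen sshd 8022': succeed if anything
# accepts TCP connections on the port. The real kolla helper checks which
# process is listening; this sketch does not.
import socket


def port_is_listening(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    ok = port_is_listening("127.0.0.1", 8022)
    print("sshd reachable on 8022" if ok else "nothing listening on 8022")
```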
orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 02:54:31.699795 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-14 02:54:31.699803 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-14 02:54:31.699811 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.699819 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-14 02:54:31.699826 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.699834 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-14 02:54:31.699843 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.699851 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 02:54:31.699859 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 02:54:31.699866 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-14 02:54:31.699873 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 02:54:31.699881 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 02:54:31.699888 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-14 02:54:31.699896 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 02:54:31.699903 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 02:54:31.699911 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-14 02:54:31.699919 | orchestrator | 2025-05-14 02:54:31.699926 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-14 02:54:31.699934 | orchestrator | Wednesday 14 May 2025 02:51:35 +0000 (0:00:11.095) 0:05:38.761 ********* 2025-05-14 02:54:31.699942 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.699949 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.699957 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.699964 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.699972 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.699979 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.699986 | orchestrator | 2025-05-14 02:54:31.699994 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-14 02:54:31.700048 | orchestrator | Wednesday 14 May 2025 02:51:36 +0000 (0:00:00.739) 0:05:39.500 ********* 2025-05-14 02:54:31.700056 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.700064 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.700072 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.700080 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.700087 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.700095 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.700103 | orchestrator | 2025-05-14 02:54:31.700110 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-14 02:54:31.700129 | 
orchestrator | Wednesday 14 May 2025 02:51:37 +0000 (0:00:00.936) 0:05:40.437 ********* 2025-05-14 02:54:31.700137 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.700145 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.700153 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.700160 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:54:31.700174 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:54:31.700181 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:54:31.700188 | orchestrator | 2025-05-14 02:54:31.700196 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-14 02:54:31.700203 | orchestrator | Wednesday 14 May 2025 02:51:40 +0000 (0:00:02.885) 0:05:43.322 ********* 2025-05-14 02:54:31.700220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.700230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.700239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.700248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.700256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.700275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.700375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.700385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.700393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.700409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.700426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.700448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700456 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.700465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.700472 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.700480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.700506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.700518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700565 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.700572 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.700584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.700595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.700603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.700610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.700618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.700631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700663 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.700671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.700679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.700686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.700698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.700706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.700717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700744 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.700752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.700764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.700772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.700784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.700796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.700804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.700832 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.700839 | orchestrator | 2025-05-14 02:54:31.700847 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-14 02:54:31.700854 | orchestrator | Wednesday 14 May 2025 02:51:42 +0000 (0:00:01.893) 0:05:45.215 ********* 2025-05-14 02:54:31.700861 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-14 02:54:31.700869 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-14 02:54:31.700876 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.700884 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-14 02:54:31.700891 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-14 02:54:31.700898 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.700905 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-14 02:54:31.700913 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-14 02:54:31.700920 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-14 02:54:31.700927 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-14 02:54:31.700935 | orchestrator | 
skipping: [testbed-node-5] 2025-05-14 02:54:31.700942 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-14 02:54:31.700949 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-14 02:54:31.700957 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.700964 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.700974 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-14 02:54:31.700981 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-14 02:54:31.700988 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.700995 | orchestrator | 2025-05-14 02:54:31.701017 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-14 02:54:31.701023 | orchestrator | Wednesday 14 May 2025 02:51:43 +0000 (0:00:01.027) 0:05:46.243 ********* 2025-05-14 02:54:31.701035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.701056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.701064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.701075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.701086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-14 02:54:31.701093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-14 02:54:31.701106 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701113 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.701143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.701156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.701173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.701180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.701188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.701199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.701222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.701227 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.701236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.701241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.701249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.701258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701271 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.701287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.701295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.701302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': 
{'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-14 02:54:31.701313 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-14 02:54:31.701325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.701339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.701362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.701373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.701395 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.701411 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.701435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-14 02:54:31.701447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.701453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.701457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.701462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-14 02:54:31.701467 | orchestrator | 2025-05-14 02:54:31.701471 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-14 02:54:31.701476 | orchestrator | Wednesday 14 May 2025 02:51:46 +0000 (0:00:03.583) 0:05:49.827 ********* 2025-05-14 02:54:31.701481 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.701485 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.701490 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.701494 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.701499 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.701503 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.701507 | orchestrator | 2025-05-14 02:54:31.701512 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 02:54:31.701517 | orchestrator | Wednesday 14 May 2025 02:51:47 +0000 (0:00:00.755) 0:05:50.582 ********* 2025-05-14 02:54:31.701521 | orchestrator | 2025-05-14 02:54:31.701526 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 02:54:31.701535 | orchestrator | Wednesday 14 May 2025 02:51:47 +0000 (0:00:00.102) 0:05:50.685 ********* 2025-05-14 02:54:31.701540 | orchestrator | 2025-05-14 02:54:31.701544 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 02:54:31.701548 | orchestrator | Wednesday 14 May 2025 02:51:47 +0000 (0:00:00.236) 0:05:50.921 ********* 2025-05-14 
02:54:31.701553 | orchestrator | 2025-05-14 02:54:31.701557 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 02:54:31.701562 | orchestrator | Wednesday 14 May 2025 02:51:48 +0000 (0:00:00.111) 0:05:51.033 ********* 2025-05-14 02:54:31.701566 | orchestrator | 2025-05-14 02:54:31.701571 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 02:54:31.701579 | orchestrator | Wednesday 14 May 2025 02:51:48 +0000 (0:00:00.298) 0:05:51.331 ********* 2025-05-14 02:54:31.701583 | orchestrator | 2025-05-14 02:54:31.701588 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-14 02:54:31.701592 | orchestrator | Wednesday 14 May 2025 02:51:48 +0000 (0:00:00.111) 0:05:51.443 ********* 2025-05-14 02:54:31.701597 | orchestrator | 2025-05-14 02:54:31.701601 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-05-14 02:54:31.701605 | orchestrator | Wednesday 14 May 2025 02:51:48 +0000 (0:00:00.325) 0:05:51.768 ********* 2025-05-14 02:54:31.701610 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.701614 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:54:31.701619 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:54:31.701623 | orchestrator | 2025-05-14 02:54:31.701628 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-05-14 02:54:31.701632 | orchestrator | Wednesday 14 May 2025 02:52:01 +0000 (0:00:12.791) 0:06:04.560 ********* 2025-05-14 02:54:31.701639 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.701643 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:54:31.701648 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:54:31.701652 | orchestrator | 2025-05-14 02:54:31.701657 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-05-14 02:54:31.701662 | orchestrator | Wednesday 14 May 2025 02:52:17 +0000 (0:00:15.984) 0:06:20.544 ********* 2025-05-14 02:54:31.701670 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:54:31.701677 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:54:31.701685 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:54:31.701693 | orchestrator | 2025-05-14 02:54:31.701700 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-05-14 02:54:31.701707 | orchestrator | Wednesday 14 May 2025 02:52:33 +0000 (0:00:16.431) 0:06:36.976 ********* 2025-05-14 02:54:31.701715 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:54:31.701722 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:54:31.701730 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:54:31.701736 | orchestrator | 2025-05-14 02:54:31.701741 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-05-14 02:54:31.701745 | orchestrator | Wednesday 14 May 2025 02:53:00 +0000 (0:00:26.928) 0:07:03.904 ********* 2025-05-14 02:54:31.701749 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:54:31.701754 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:54:31.701758 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:54:31.701763 | orchestrator | 2025-05-14 02:54:31.701767 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-05-14 02:54:31.701772 | orchestrator | Wednesday 14 
May 2025 02:53:01 +0000 (0:00:00.741) 0:07:04.646 ********* 2025-05-14 02:54:31.701776 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:54:31.701780 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:54:31.701785 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:54:31.701789 | orchestrator | 2025-05-14 02:54:31.701794 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-05-14 02:54:31.701798 | orchestrator | Wednesday 14 May 2025 02:53:02 +0000 (0:00:00.957) 0:07:05.604 ********* 2025-05-14 02:54:31.701809 | orchestrator | changed: [testbed-node-5] 2025-05-14 02:54:31.701814 | orchestrator | changed: [testbed-node-4] 2025-05-14 02:54:31.701818 | orchestrator | changed: [testbed-node-3] 2025-05-14 02:54:31.701823 | orchestrator | 2025-05-14 02:54:31.701827 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-05-14 02:54:31.701831 | orchestrator | Wednesday 14 May 2025 02:53:25 +0000 (0:00:23.049) 0:07:28.653 ********* 2025-05-14 02:54:31.701836 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.701840 | orchestrator | 2025-05-14 02:54:31.701845 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-05-14 02:54:31.701849 | orchestrator | Wednesday 14 May 2025 02:53:25 +0000 (0:00:00.125) 0:07:28.779 ********* 2025-05-14 02:54:31.701854 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.701858 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.701863 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.701867 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.701871 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.701876 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
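All of the container definitions listed in the loop output above carry the same healthcheck shape: an interval, retry count, start period and timeout (seconds, serialized as strings) plus a CMD-SHELL test such as healthcheck_curl or healthcheck_port. A minimal sketch, in Python, of rendering one of these dictionaries as docker-run style healthcheck flags; the sample dict is copied from the nova-novncproxy entry above, the flag names are the standard Docker CLI options, and this is purely illustrative, not how kolla-ansible actually applies the healthcheck:

import shlex

# Sample healthcheck definition, as emitted in the loop output above.
healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "timeout": "30",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.12:6080/vnc_lite.html"],
}

def healthcheck_flags(hc):
    """Render a kolla-style healthcheck dict as `docker run` flags (illustrative only)."""
    flags = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", hc["retries"],
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
        # CMD-SHELL means the test string is executed through the container's shell.
        "--health-cmd", hc["test"][1],
    ]
    return " ".join(shlex.quote(f) for f in flags)

print(healthcheck_flags(healthcheck))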
2025-05-14 02:54:31.701881 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:54:31.701885 | orchestrator | 2025-05-14 02:54:31.701890 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-05-14 02:54:31.701894 | orchestrator | Wednesday 14 May 2025 02:53:48 +0000 (0:00:22.306) 0:07:51.085 ********* 2025-05-14 02:54:31.701899 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.701903 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.701907 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.701912 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.701916 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.701921 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.701925 | orchestrator | 2025-05-14 02:54:31.701929 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-05-14 02:54:31.701934 | orchestrator | Wednesday 14 May 2025 02:53:57 +0000 (0:00:09.267) 0:08:00.353 ********* 2025-05-14 02:54:31.701939 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.701943 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.701947 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.701952 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.701956 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.701961 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-05-14 02:54:31.701965 | orchestrator | 2025-05-14 02:54:31.701969 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-14 02:54:31.701974 | orchestrator | Wednesday 14 May 2025 02:54:00 +0000 (0:00:03.358) 0:08:03.711 ********* 2025-05-14 02:54:31.701978 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:54:31.701983 | orchestrator | 2025-05-14 02:54:31.701987 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-14 02:54:31.701995 | orchestrator | Wednesday 14 May 2025 02:54:11 +0000 (0:00:10.912) 0:08:14.623 ********* 2025-05-14 02:54:31.702045 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:54:31.702050 | orchestrator | 2025-05-14 02:54:31.702054 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-05-14 02:54:31.702058 | orchestrator | Wednesday 14 May 2025 02:54:12 +0000 (0:00:01.061) 0:08:15.685 ********* 2025-05-14 02:54:31.702063 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.702067 | orchestrator | 2025-05-14 02:54:31.702072 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-05-14 02:54:31.702076 | orchestrator | Wednesday 14 May 2025 02:54:13 +0000 (0:00:01.194) 0:08:16.879 ********* 2025-05-14 02:54:31.702082 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-05-14 02:54:31.702097 | orchestrator | 2025-05-14 02:54:31.702105 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-05-14 02:54:31.702229 | orchestrator | Wednesday 14 May 2025 02:54:23 +0000 (0:00:09.267) 0:08:26.147 ********* 2025-05-14 02:54:31.702315 | orchestrator | ok: [testbed-node-3] 2025-05-14 02:54:31.702323 | orchestrator | ok: [testbed-node-4] 2025-05-14 02:54:31.702328 | orchestrator | ok: 
[testbed-node-5] 2025-05-14 02:54:31.702333 | orchestrator | ok: [testbed-node-0] 2025-05-14 02:54:31.702337 | orchestrator | ok: [testbed-node-1] 2025-05-14 02:54:31.702341 | orchestrator | ok: [testbed-node-2] 2025-05-14 02:54:31.702346 | orchestrator | 2025-05-14 02:54:31.702352 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-05-14 02:54:31.702356 | orchestrator | 2025-05-14 02:54:31.702361 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-05-14 02:54:31.702366 | orchestrator | Wednesday 14 May 2025 02:54:25 +0000 (0:00:02.032) 0:08:28.180 ********* 2025-05-14 02:54:31.702371 | orchestrator | changed: [testbed-node-0] 2025-05-14 02:54:31.702376 | orchestrator | changed: [testbed-node-1] 2025-05-14 02:54:31.702380 | orchestrator | changed: [testbed-node-2] 2025-05-14 02:54:31.702384 | orchestrator | 2025-05-14 02:54:31.702389 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-05-14 02:54:31.702393 | orchestrator | 2025-05-14 02:54:31.702397 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-05-14 02:54:31.702401 | orchestrator | Wednesday 14 May 2025 02:54:26 +0000 (0:00:01.000) 0:08:29.180 ********* 2025-05-14 02:54:31.702405 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.702409 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.702413 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.702417 | orchestrator | 2025-05-14 02:54:31.702421 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-05-14 02:54:31.702425 | orchestrator | 2025-05-14 02:54:31.702429 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-05-14 02:54:31.702433 | orchestrator | Wednesday 14 May 2025 02:54:26 +0000 (0:00:00.760) 0:08:29.940 ********* 2025-05-14 02:54:31.702437 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-05-14 02:54:31.702442 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-14 02:54:31.702446 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-14 02:54:31.702451 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-05-14 02:54:31.702455 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-05-14 02:54:31.702459 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-05-14 02:54:31.702463 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-05-14 02:54:31.702466 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-14 02:54:31.702471 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-14 02:54:31.702475 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-05-14 02:54:31.702479 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-05-14 02:54:31.702483 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-05-14 02:54:31.702487 | orchestrator | skipping: [testbed-node-3] 2025-05-14 02:54:31.702491 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-05-14 02:54:31.702495 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-14 02:54:31.702499 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  
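The discover_computes.yml flow shown above (get a list of existing cells, extract the current cell settings, then discover nova hosts) maps onto nova-manage cell_v2 commands. A short sketch of that sequence, assuming for illustration that the commands are run through docker exec against the nova_conductor container on the delegate host; the exact invocation used by the role may differ:

import subprocess

CONDUCTOR = "nova_conductor"  # container name from the definitions above

def nova_manage(*args):
    """Run a nova-manage cell_v2 command inside the conductor container (illustrative)."""
    cmd = ["docker", "exec", CONDUCTOR, "nova-manage", "cell_v2", *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Get a list of existing cells; the current cell's settings are parsed from this output.
cells = nova_manage("list_cells", "--verbose")

# 2. Make newly registered nova-compute services visible to the cell.
nova_manage("discover_hosts", "--by-service")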
2025-05-14 02:54:31.702503 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-05-14 02:54:31.702507 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-05-14 02:54:31.702511 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-05-14 02:54:31.702514 | orchestrator | skipping: [testbed-node-4] 2025-05-14 02:54:31.702518 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-05-14 02:54:31.702542 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-14 02:54:31.702546 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-14 02:54:31.702550 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-05-14 02:54:31.702554 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-05-14 02:54:31.702558 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-05-14 02:54:31.702562 | orchestrator | skipping: [testbed-node-5] 2025-05-14 02:54:31.702566 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-05-14 02:54:31.702570 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-14 02:54:31.702574 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-14 02:54:31.702578 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-05-14 02:54:31.702582 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-05-14 02:54:31.702586 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-05-14 02:54:31.702590 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.702595 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.702599 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-05-14 02:54:31.702613 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-14 02:54:31.702617 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-14 02:54:31.702621 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-05-14 02:54:31.702625 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-05-14 02:54:31.702629 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-05-14 02:54:31.702633 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.702637 | orchestrator | 2025-05-14 02:54:31.702641 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-05-14 02:54:31.702645 | orchestrator | 2025-05-14 02:54:31.702649 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-05-14 02:54:31.702653 | orchestrator | Wednesday 14 May 2025 02:54:28 +0000 (0:00:01.378) 0:08:31.318 ********* 2025-05-14 02:54:31.702657 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-05-14 02:54:31.702672 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-05-14 02:54:31.702676 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.702680 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-05-14 02:54:31.702684 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-05-14 02:54:31.702688 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.702692 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-05-14 02:54:31.702696 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-05-14 02:54:31.702700 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.702704 | orchestrator | 2025-05-14 02:54:31.702708 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-05-14 02:54:31.702712 | orchestrator | 2025-05-14 02:54:31.702716 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-05-14 02:54:31.702720 | orchestrator | Wednesday 14 May 2025 02:54:29 +0000 (0:00:00.804) 0:08:32.123 ********* 2025-05-14 02:54:31.702724 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.702728 | orchestrator | 2025-05-14 02:54:31.702732 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-05-14 02:54:31.702736 | orchestrator | 2025-05-14 02:54:31.702740 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-05-14 02:54:31.702744 | orchestrator | Wednesday 14 May 2025 02:54:30 +0000 (0:00:00.934) 0:08:33.057 ********* 2025-05-14 02:54:31.702748 | orchestrator | skipping: [testbed-node-0] 2025-05-14 02:54:31.702752 | orchestrator | skipping: [testbed-node-1] 2025-05-14 02:54:31.702756 | orchestrator | skipping: [testbed-node-2] 2025-05-14 02:54:31.702764 | orchestrator | 2025-05-14 02:54:31.702768 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-14 02:54:31.702772 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-14 02:54:31.702779 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-05-14 02:54:31.702784 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-14 02:54:31.702788 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-14 02:54:31.702792 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-05-14 02:54:31.702796 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-05-14 02:54:31.702800 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-14 02:54:31.702804 | orchestrator | 2025-05-14 02:54:31.702808 | orchestrator | 2025-05-14 02:54:31.702812 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-14 02:54:31.702816 | orchestrator | Wednesday 14 May 2025 02:54:30 +0000 (0:00:00.551) 0:08:33.609 ********* 2025-05-14 02:54:31.702820 | orchestrator | =============================================================================== 2025-05-14 02:54:31.702825 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.31s 2025-05-14 02:54:31.702829 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 26.93s 2025-05-14 02:54:31.702833 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.05s 2025-05-14 02:54:31.702837 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.61s 2025-05-14 02:54:31.702840 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.31s 2025-05-14 02:54:31.702844 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 20.78s 2025-05-14 02:54:31.702848 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.23s 2025-05-14 02:54:31.702852 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 16.43s 2025-05-14 02:54:31.702856 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.98s 2025-05-14 02:54:31.702860 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.27s 2025-05-14 02:54:31.702864 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.07s 2025-05-14 02:54:31.702871 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.33s 2025-05-14 02:54:31.702875 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.79s 2025-05-14 02:54:31.702879 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 11.10s 2025-05-14 02:54:31.702883 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.91s 2025-05-14 02:54:31.702887 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.34s 2025-05-14 02:54:31.702891 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.52s 2025-05-14 02:54:31.702895 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.27s 2025-05-14 02:54:31.702899 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.27s 2025-05-14 02:54:31.702903 | orchestrator | nova-cell : Copying over libvirt SASL configuration --------------------- 8.31s 2025-05-14 02:54:34.736551 | orchestrator | 2025-05-14 02:54:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:34.736666 | orchestrator | 2025-05-14 02:54:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:37.791445 | orchestrator | 2025-05-14 02:54:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:37.791563 | orchestrator | 2025-05-14 02:54:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:40.846549 | orchestrator | 2025-05-14 02:54:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:40.846655 | orchestrator | 2025-05-14 02:54:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:43.903234 | orchestrator | 2025-05-14 02:54:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:43.903344 | orchestrator | 2025-05-14 02:54:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:46.950730 | orchestrator | 2025-05-14 02:54:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:46.950891 | orchestrator | 2025-05-14 02:54:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:50.009310 | orchestrator | 2025-05-14 02:54:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:50.009478 | orchestrator | 2025-05-14 02:54:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:53.062284 | orchestrator | 2025-05-14 02:54:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:53.062421 | orchestrator | 2025-05-14 02:54:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 
02:54:56.104363 | orchestrator | 2025-05-14 02:54:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:56.104506 | orchestrator | 2025-05-14 02:54:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:54:59.147724 | orchestrator | 2025-05-14 02:54:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:54:59.147826 | orchestrator | 2025-05-14 02:54:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:02.194403 | orchestrator | 2025-05-14 02:55:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:02.194541 | orchestrator | 2025-05-14 02:55:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:05.234467 | orchestrator | 2025-05-14 02:55:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:05.234564 | orchestrator | 2025-05-14 02:55:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:08.282246 | orchestrator | 2025-05-14 02:55:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:08.282362 | orchestrator | 2025-05-14 02:55:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:11.325414 | orchestrator | 2025-05-14 02:55:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:11.325487 | orchestrator | 2025-05-14 02:55:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:14.372366 | orchestrator | 2025-05-14 02:55:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:14.372445 | orchestrator | 2025-05-14 02:55:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:17.410582 | orchestrator | 2025-05-14 02:55:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:17.410662 | orchestrator | 2025-05-14 02:55:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:20.466772 | orchestrator | 2025-05-14 02:55:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:20.466880 | orchestrator | 2025-05-14 02:55:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:23.518808 | orchestrator | 2025-05-14 02:55:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:23.518878 | orchestrator | 2025-05-14 02:55:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:26.576358 | orchestrator | 2025-05-14 02:55:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:26.576465 | orchestrator | 2025-05-14 02:55:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:29.635253 | orchestrator | 2025-05-14 02:55:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:29.635337 | orchestrator | 2025-05-14 02:55:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:32.689032 | orchestrator | 2025-05-14 02:55:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:32.689139 | orchestrator | 2025-05-14 02:55:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:35.738737 | orchestrator | 2025-05-14 02:55:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:35.738861 | orchestrator | 2025-05-14 02:55:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:38.794927 | orchestrator | 2025-05-14 02:55:38 | INFO  | Task 
d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:38.795029 | orchestrator | 2025-05-14 02:55:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:41.846892 | orchestrator | 2025-05-14 02:55:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:41.847076 | orchestrator | 2025-05-14 02:55:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:44.894183 | orchestrator | 2025-05-14 02:55:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:44.894296 | orchestrator | 2025-05-14 02:55:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:47.937896 | orchestrator | 2025-05-14 02:55:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:47.938162 | orchestrator | 2025-05-14 02:55:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:50.983916 | orchestrator | 2025-05-14 02:55:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:50.984018 | orchestrator | 2025-05-14 02:55:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:54.042845 | orchestrator | 2025-05-14 02:55:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:54.043040 | orchestrator | 2025-05-14 02:55:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:55:57.082669 | orchestrator | 2025-05-14 02:55:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:55:57.082806 | orchestrator | 2025-05-14 02:55:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:00.129665 | orchestrator | 2025-05-14 02:56:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:00.129758 | orchestrator | 2025-05-14 02:56:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:03.168783 | orchestrator | 2025-05-14 02:56:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:03.168890 | orchestrator | 2025-05-14 02:56:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:06.214651 | orchestrator | 2025-05-14 02:56:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:06.214817 | orchestrator | 2025-05-14 02:56:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:09.262451 | orchestrator | 2025-05-14 02:56:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:09.262554 | orchestrator | 2025-05-14 02:56:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:12.321030 | orchestrator | 2025-05-14 02:56:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:12.321131 | orchestrator | 2025-05-14 02:56:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:15.369444 | orchestrator | 2025-05-14 02:56:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:15.369540 | orchestrator | 2025-05-14 02:56:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:18.422765 | orchestrator | 2025-05-14 02:56:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:18.422882 | orchestrator | 2025-05-14 02:56:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:21.481439 | orchestrator | 2025-05-14 02:56:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 
02:56:21.481557 | orchestrator | 2025-05-14 02:56:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:24.532780 | orchestrator | 2025-05-14 02:56:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:24.532883 | orchestrator | 2025-05-14 02:56:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:27.598313 | orchestrator | 2025-05-14 02:56:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:27.598424 | orchestrator | 2025-05-14 02:56:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:30.648528 | orchestrator | 2025-05-14 02:56:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:30.648609 | orchestrator | 2025-05-14 02:56:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:33.699584 | orchestrator | 2025-05-14 02:56:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:33.699725 | orchestrator | 2025-05-14 02:56:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:36.752772 | orchestrator | 2025-05-14 02:56:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:36.752983 | orchestrator | 2025-05-14 02:56:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:39.803590 | orchestrator | 2025-05-14 02:56:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:39.803717 | orchestrator | 2025-05-14 02:56:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:42.856740 | orchestrator | 2025-05-14 02:56:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:42.856829 | orchestrator | 2025-05-14 02:56:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:45.905187 | orchestrator | 2025-05-14 02:56:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:45.905315 | orchestrator | 2025-05-14 02:56:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:48.948742 | orchestrator | 2025-05-14 02:56:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:48.948916 | orchestrator | 2025-05-14 02:56:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:51.995190 | orchestrator | 2025-05-14 02:56:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:51.995325 | orchestrator | 2025-05-14 02:56:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:55.042751 | orchestrator | 2025-05-14 02:56:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:55.042869 | orchestrator | 2025-05-14 02:56:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:56:58.086642 | orchestrator | 2025-05-14 02:56:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:56:58.086746 | orchestrator | 2025-05-14 02:56:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:01.137761 | orchestrator | 2025-05-14 02:57:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:01.137842 | orchestrator | 2025-05-14 02:57:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:04.180748 | orchestrator | 2025-05-14 02:57:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:04.180854 | orchestrator | 2025-05-14 02:57:04 | INFO  | Wait 1 second(s) 
until the next check 2025-05-14 02:57:07.229742 | orchestrator | 2025-05-14 02:57:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:07.229847 | orchestrator | 2025-05-14 02:57:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:10.267059 | orchestrator | 2025-05-14 02:57:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:10.267186 | orchestrator | 2025-05-14 02:57:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:13.318832 | orchestrator | 2025-05-14 02:57:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:13.318967 | orchestrator | 2025-05-14 02:57:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:16.373734 | orchestrator | 2025-05-14 02:57:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:16.373859 | orchestrator | 2025-05-14 02:57:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:19.433627 | orchestrator | 2025-05-14 02:57:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:19.433712 | orchestrator | 2025-05-14 02:57:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:22.475185 | orchestrator | 2025-05-14 02:57:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:22.475293 | orchestrator | 2025-05-14 02:57:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:25.526115 | orchestrator | 2025-05-14 02:57:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:25.526252 | orchestrator | 2025-05-14 02:57:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:28.576467 | orchestrator | 2025-05-14 02:57:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:28.576569 | orchestrator | 2025-05-14 02:57:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:31.629783 | orchestrator | 2025-05-14 02:57:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:31.630910 | orchestrator | 2025-05-14 02:57:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:34.685368 | orchestrator | 2025-05-14 02:57:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:34.685474 | orchestrator | 2025-05-14 02:57:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:37.731839 | orchestrator | 2025-05-14 02:57:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:37.732014 | orchestrator | 2025-05-14 02:57:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:40.780538 | orchestrator | 2025-05-14 02:57:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:40.780639 | orchestrator | 2025-05-14 02:57:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:43.827547 | orchestrator | 2025-05-14 02:57:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:43.827653 | orchestrator | 2025-05-14 02:57:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:46.882994 | orchestrator | 2025-05-14 02:57:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:46.883098 | orchestrator | 2025-05-14 02:57:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:49.935561 | orchestrator | 2025-05-14 
02:57:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:49.935658 | orchestrator | 2025-05-14 02:57:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:52.989269 | orchestrator | 2025-05-14 02:57:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:52.989376 | orchestrator | 2025-05-14 02:57:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:56.039718 | orchestrator | 2025-05-14 02:57:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:56.039792 | orchestrator | 2025-05-14 02:57:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:57:59.091167 | orchestrator | 2025-05-14 02:57:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:57:59.091301 | orchestrator | 2025-05-14 02:57:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:02.136389 | orchestrator | 2025-05-14 02:58:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:02.136518 | orchestrator | 2025-05-14 02:58:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:05.173655 | orchestrator | 2025-05-14 02:58:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:05.173760 | orchestrator | 2025-05-14 02:58:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:08.216007 | orchestrator | 2025-05-14 02:58:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:08.216150 | orchestrator | 2025-05-14 02:58:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:11.268061 | orchestrator | 2025-05-14 02:58:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:11.268185 | orchestrator | 2025-05-14 02:58:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:14.310364 | orchestrator | 2025-05-14 02:58:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:14.310493 | orchestrator | 2025-05-14 02:58:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:17.365343 | orchestrator | 2025-05-14 02:58:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:17.365489 | orchestrator | 2025-05-14 02:58:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:20.412276 | orchestrator | 2025-05-14 02:58:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:20.412388 | orchestrator | 2025-05-14 02:58:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:23.461416 | orchestrator | 2025-05-14 02:58:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:23.461545 | orchestrator | 2025-05-14 02:58:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:26.511513 | orchestrator | 2025-05-14 02:58:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:26.511662 | orchestrator | 2025-05-14 02:58:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:29.564097 | orchestrator | 2025-05-14 02:58:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:29.564239 | orchestrator | 2025-05-14 02:58:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:32.608801 | orchestrator | 2025-05-14 02:58:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 
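The long run of "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" messages is a plain poll-until-terminal-state loop on the deploy task. A generic sketch of that pattern, assuming a hypothetical get_state(task_id) helper; it is not the actual OSISM client code:

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_task(task_id, get_state, delay=1):
    """Poll a task until it reaches a terminal state, logging each check."""
    while True:
        state = get_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in TERMINAL_STATES:
            return state
        print(f"Wait {delay} second(s) until the next check")
        time.sleep(delay)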
2025-05-14 02:58:32.609006 | orchestrator | 2025-05-14 02:58:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:35.650373 | orchestrator | 2025-05-14 02:58:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:35.650478 | orchestrator | 2025-05-14 02:58:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:38.698478 | orchestrator | 2025-05-14 02:58:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:38.698604 | orchestrator | 2025-05-14 02:58:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:41.748576 | orchestrator | 2025-05-14 02:58:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:41.748701 | orchestrator | 2025-05-14 02:58:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:44.796095 | orchestrator | 2025-05-14 02:58:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:44.796190 | orchestrator | 2025-05-14 02:58:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:47.837110 | orchestrator | 2025-05-14 02:58:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:47.837233 | orchestrator | 2025-05-14 02:58:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:50.887134 | orchestrator | 2025-05-14 02:58:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:50.887225 | orchestrator | 2025-05-14 02:58:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:53.936692 | orchestrator | 2025-05-14 02:58:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:53.936793 | orchestrator | 2025-05-14 02:58:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:58:56.987300 | orchestrator | 2025-05-14 02:58:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:58:56.987411 | orchestrator | 2025-05-14 02:58:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:00.040428 | orchestrator | 2025-05-14 02:59:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:00.040531 | orchestrator | 2025-05-14 02:59:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:03.075131 | orchestrator | 2025-05-14 02:59:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:03.075238 | orchestrator | 2025-05-14 02:59:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:06.122071 | orchestrator | 2025-05-14 02:59:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:06.122178 | orchestrator | 2025-05-14 02:59:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:09.167733 | orchestrator | 2025-05-14 02:59:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:09.167933 | orchestrator | 2025-05-14 02:59:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:12.214767 | orchestrator | 2025-05-14 02:59:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:12.214971 | orchestrator | 2025-05-14 02:59:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:15.261606 | orchestrator | 2025-05-14 02:59:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:15.261702 | orchestrator | 2025-05-14 02:59:15 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 02:59:18.306257 | orchestrator | 2025-05-14 02:59:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:18.306385 | orchestrator | 2025-05-14 02:59:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:21.345113 | orchestrator | 2025-05-14 02:59:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:21.345218 | orchestrator | 2025-05-14 02:59:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:24.395984 | orchestrator | 2025-05-14 02:59:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:24.396102 | orchestrator | 2025-05-14 02:59:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:27.442676 | orchestrator | 2025-05-14 02:59:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:27.442756 | orchestrator | 2025-05-14 02:59:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:30.484035 | orchestrator | 2025-05-14 02:59:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:30.484129 | orchestrator | 2025-05-14 02:59:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:33.536559 | orchestrator | 2025-05-14 02:59:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:33.536644 | orchestrator | 2025-05-14 02:59:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:36.587897 | orchestrator | 2025-05-14 02:59:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:36.587997 | orchestrator | 2025-05-14 02:59:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:39.646236 | orchestrator | 2025-05-14 02:59:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:39.646372 | orchestrator | 2025-05-14 02:59:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:42.689424 | orchestrator | 2025-05-14 02:59:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:42.689552 | orchestrator | 2025-05-14 02:59:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:45.752049 | orchestrator | 2025-05-14 02:59:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:45.752184 | orchestrator | 2025-05-14 02:59:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:48.792751 | orchestrator | 2025-05-14 02:59:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:48.792902 | orchestrator | 2025-05-14 02:59:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:51.841994 | orchestrator | 2025-05-14 02:59:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:51.842164 | orchestrator | 2025-05-14 02:59:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:54.894291 | orchestrator | 2025-05-14 02:59:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:54.894396 | orchestrator | 2025-05-14 02:59:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 02:59:57.947691 | orchestrator | 2025-05-14 02:59:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 02:59:57.947762 | orchestrator | 2025-05-14 02:59:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:00.999282 | orchestrator | 
2025-05-14 03:00:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:00.999396 | orchestrator | 2025-05-14 03:00:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:04.051573 | orchestrator | 2025-05-14 03:00:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:04.051682 | orchestrator | 2025-05-14 03:00:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:07.093556 | orchestrator | 2025-05-14 03:00:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:07.093660 | orchestrator | 2025-05-14 03:00:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:10.140318 | orchestrator | 2025-05-14 03:00:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:10.140442 | orchestrator | 2025-05-14 03:00:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:13.182242 | orchestrator | 2025-05-14 03:00:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:13.182355 | orchestrator | 2025-05-14 03:00:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:16.226364 | orchestrator | 2025-05-14 03:00:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:16.226473 | orchestrator | 2025-05-14 03:00:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:19.263494 | orchestrator | 2025-05-14 03:00:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:19.263595 | orchestrator | 2025-05-14 03:00:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:22.309873 | orchestrator | 2025-05-14 03:00:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:22.309973 | orchestrator | 2025-05-14 03:00:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:25.353500 | orchestrator | 2025-05-14 03:00:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:25.353604 | orchestrator | 2025-05-14 03:00:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:28.404902 | orchestrator | 2025-05-14 03:00:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:28.405012 | orchestrator | 2025-05-14 03:00:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:31.465062 | orchestrator | 2025-05-14 03:00:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:31.465111 | orchestrator | 2025-05-14 03:00:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:34.509531 | orchestrator | 2025-05-14 03:00:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:34.509629 | orchestrator | 2025-05-14 03:00:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:37.567180 | orchestrator | 2025-05-14 03:00:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:37.567308 | orchestrator | 2025-05-14 03:00:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:40.615269 | orchestrator | 2025-05-14 03:00:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:40.615346 | orchestrator | 2025-05-14 03:00:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:43.663960 | orchestrator | 2025-05-14 03:00:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in 
state STARTED 2025-05-14 03:00:43.664049 | orchestrator | 2025-05-14 03:00:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:46.710029 | orchestrator | 2025-05-14 03:00:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:46.710166 | orchestrator | 2025-05-14 03:00:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:49.760507 | orchestrator | 2025-05-14 03:00:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:49.760635 | orchestrator | 2025-05-14 03:00:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:52.808725 | orchestrator | 2025-05-14 03:00:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:52.808959 | orchestrator | 2025-05-14 03:00:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:55.865110 | orchestrator | 2025-05-14 03:00:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:55.865211 | orchestrator | 2025-05-14 03:00:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:00:58.915198 | orchestrator | 2025-05-14 03:00:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:00:58.915324 | orchestrator | 2025-05-14 03:00:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:01.959298 | orchestrator | 2025-05-14 03:01:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:01.959423 | orchestrator | 2025-05-14 03:01:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:05.016119 | orchestrator | 2025-05-14 03:01:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:05.016230 | orchestrator | 2025-05-14 03:01:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:08.061721 | orchestrator | 2025-05-14 03:01:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:08.061886 | orchestrator | 2025-05-14 03:01:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:11.109744 | orchestrator | 2025-05-14 03:01:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:11.109859 | orchestrator | 2025-05-14 03:01:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:14.154121 | orchestrator | 2025-05-14 03:01:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:14.154257 | orchestrator | 2025-05-14 03:01:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:17.193718 | orchestrator | 2025-05-14 03:01:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:17.193906 | orchestrator | 2025-05-14 03:01:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:20.242313 | orchestrator | 2025-05-14 03:01:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:20.242431 | orchestrator | 2025-05-14 03:01:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:23.293659 | orchestrator | 2025-05-14 03:01:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:23.293785 | orchestrator | 2025-05-14 03:01:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:26.347292 | orchestrator | 2025-05-14 03:01:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:26.347505 | orchestrator | 2025-05-14 03:01:26 | 
INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:29.404526 | orchestrator | 2025-05-14 03:01:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:29.404633 | orchestrator | 2025-05-14 03:01:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:32.462793 | orchestrator | 2025-05-14 03:01:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:32.462904 | orchestrator | 2025-05-14 03:01:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:35.515484 | orchestrator | 2025-05-14 03:01:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:35.515555 | orchestrator | 2025-05-14 03:01:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:38.563163 | orchestrator | 2025-05-14 03:01:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:38.563244 | orchestrator | 2025-05-14 03:01:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:41.608913 | orchestrator | 2025-05-14 03:01:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:41.609897 | orchestrator | 2025-05-14 03:01:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:44.658913 | orchestrator | 2025-05-14 03:01:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:44.659043 | orchestrator | 2025-05-14 03:01:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:47.707858 | orchestrator | 2025-05-14 03:01:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:47.707969 | orchestrator | 2025-05-14 03:01:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:50.758396 | orchestrator | 2025-05-14 03:01:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:50.758501 | orchestrator | 2025-05-14 03:01:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:53.805458 | orchestrator | 2025-05-14 03:01:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:53.805563 | orchestrator | 2025-05-14 03:01:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:56.853752 | orchestrator | 2025-05-14 03:01:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:56.853938 | orchestrator | 2025-05-14 03:01:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:01:59.903104 | orchestrator | 2025-05-14 03:01:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:01:59.903210 | orchestrator | 2025-05-14 03:01:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:02.956022 | orchestrator | 2025-05-14 03:02:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:02.956100 | orchestrator | 2025-05-14 03:02:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:06.005606 | orchestrator | 2025-05-14 03:02:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:06.005743 | orchestrator | 2025-05-14 03:02:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:09.055153 | orchestrator | 2025-05-14 03:02:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:09.055261 | orchestrator | 2025-05-14 03:02:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:12.102172 | 
orchestrator | 2025-05-14 03:02:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:12.102279 | orchestrator | 2025-05-14 03:02:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:15.145520 | orchestrator | 2025-05-14 03:02:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:15.145660 | orchestrator | 2025-05-14 03:02:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:18.182561 | orchestrator | 2025-05-14 03:02:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:18.182651 | orchestrator | 2025-05-14 03:02:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:21.224444 | orchestrator | 2025-05-14 03:02:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:21.224552 | orchestrator | 2025-05-14 03:02:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:24.273598 | orchestrator | 2025-05-14 03:02:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:24.273687 | orchestrator | 2025-05-14 03:02:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:27.323967 | orchestrator | 2025-05-14 03:02:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:27.324106 | orchestrator | 2025-05-14 03:02:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:30.376629 | orchestrator | 2025-05-14 03:02:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:30.376754 | orchestrator | 2025-05-14 03:02:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:33.419360 | orchestrator | 2025-05-14 03:02:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:33.419470 | orchestrator | 2025-05-14 03:02:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:36.468424 | orchestrator | 2025-05-14 03:02:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:36.468573 | orchestrator | 2025-05-14 03:02:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:39.513632 | orchestrator | 2025-05-14 03:02:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:39.513722 | orchestrator | 2025-05-14 03:02:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:42.566318 | orchestrator | 2025-05-14 03:02:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:42.566429 | orchestrator | 2025-05-14 03:02:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:45.615482 | orchestrator | 2025-05-14 03:02:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:45.615576 | orchestrator | 2025-05-14 03:02:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:48.657787 | orchestrator | 2025-05-14 03:02:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:48.657937 | orchestrator | 2025-05-14 03:02:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:51.710348 | orchestrator | 2025-05-14 03:02:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:51.710449 | orchestrator | 2025-05-14 03:02:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:54.755440 | orchestrator | 2025-05-14 03:02:54 | INFO  | Task 
d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:54.755549 | orchestrator | 2025-05-14 03:02:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:02:57.797450 | orchestrator | 2025-05-14 03:02:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:02:57.797560 | orchestrator | 2025-05-14 03:02:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:00.855227 | orchestrator | 2025-05-14 03:03:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:03:00.855358 | orchestrator | 2025-05-14 03:03:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:03.907244 | orchestrator | 2025-05-14 03:03:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:03:03.907357 | orchestrator | 2025-05-14 03:03:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:06.962180 | orchestrator | 2025-05-14 03:03:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:03:06.962300 | orchestrator | 2025-05-14 03:03:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:10.015563 | orchestrator | 2025-05-14 03:03:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:03:10.015664 | orchestrator | 2025-05-14 03:03:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:13.061142 | orchestrator | 2025-05-14 03:03:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:03:13.061278 | orchestrator | 2025-05-14 03:03:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:16.105371 | orchestrator | 2025-05-14 03:03:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:03:16.105492 | orchestrator | 2025-05-14 03:03:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:19.145365 | orchestrator | 2025-05-14 03:03:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:03:19.145471 | orchestrator | 2025-05-14 03:03:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:22.195758 | orchestrator | 2025-05-14 03:03:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:03:22.195917 | orchestrator | 2025-05-14 03:03:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:25.234262 | orchestrator | 2025-05-14 03:03:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:03:25.237018 | orchestrator | 2025-05-14 03:03:25 | INFO  | Task 1a6e30f9-afc7-417c-ba71-ec16aa88a750 is in state STARTED 2025-05-14 03:03:25.237063 | orchestrator | 2025-05-14 03:03:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:28.293788 | orchestrator | 2025-05-14 03:03:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:03:28.295650 | orchestrator | 2025-05-14 03:03:28 | INFO  | Task 1a6e30f9-afc7-417c-ba71-ec16aa88a750 is in state STARTED 2025-05-14 03:03:28.296174 | orchestrator | 2025-05-14 03:03:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:03:31.355571 | orchestrator | 2025-05-14 03:03:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:03:31.356776 | orchestrator | 2025-05-14 03:03:31 | INFO  | Task 1a6e30f9-afc7-417c-ba71-ec16aa88a750 is in state STARTED 2025-05-14 03:03:31.356954 | orchestrator | 2025-05-14 03:03:31 | INFO  | Wait 1 second(s) until the next check 
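The surrounding output is a simple task-polling loop: the deploy wrapper repeatedly asks the task backend for the state of each outstanding task (the STARTED/SUCCESS values above are Celery-style task states) and sleeps one second between checks until nothing is left running. Below is a minimal sketch of that pattern only; wait_for_tasks and get_task_state are hypothetical stand-ins, not the actual osism code that produced this log.

    # Minimal sketch, not the osism implementation: poll task states until
    # no task is PENDING/STARTED any more, printing log lines like the ones above.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll each task until none is PENDING/STARTED; return the final states."""
        states = {}
        while True:
            still_running = False
            for task_id in task_ids:
                state = get_task_state(task_id)  # e.g. a Celery AsyncResult(task_id).state lookup
                states[task_id] = state
                print(f"Task {task_id} is in state {state}")
                if state in ("PENDING", "STARTED"):
                    still_running = True
            if not still_running:
                return states
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

Note that consecutive checks in the log are roughly three seconds apart even though the configured wait is one second; the state lookups themselves appear to account for the remainder of the interval.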
2025-05-14 03:03:34.422388 | orchestrator | 2025-05-14 03:03:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 03:03:34.424934 | orchestrator | 2025-05-14 03:03:34 | INFO  | Task 1a6e30f9-afc7-417c-ba71-ec16aa88a750 is in state STARTED
2025-05-14 03:03:34.425030 | orchestrator | 2025-05-14 03:03:34 | INFO  | Wait 1 second(s) until the next check
2025-05-14 03:03:37.471100 | orchestrator | 2025-05-14 03:03:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 03:03:37.472110 | orchestrator | 2025-05-14 03:03:37 | INFO  | Task 1a6e30f9-afc7-417c-ba71-ec16aa88a750 is in state SUCCESS
2025-05-14 03:03:37.472270 | orchestrator | 2025-05-14 03:03:37 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeats roughly every three seconds from 03:03:40 to 03:12:52: task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 remains in state STARTED ...]
2025-05-14 03:12:55.293770 | orchestrator | 2025-05-14 03:12:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 03:12:55.293873 | orchestrator | 2025-05-14 03:12:55 | INFO  | Wait 1
second(s) until the next check 2025-05-14 03:12:58.353996 | orchestrator | 2025-05-14 03:12:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:12:58.354175 | orchestrator | 2025-05-14 03:12:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:01.399352 | orchestrator | 2025-05-14 03:13:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:01.399452 | orchestrator | 2025-05-14 03:13:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:04.452453 | orchestrator | 2025-05-14 03:13:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:04.452560 | orchestrator | 2025-05-14 03:13:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:07.507679 | orchestrator | 2025-05-14 03:13:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:07.507792 | orchestrator | 2025-05-14 03:13:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:10.560762 | orchestrator | 2025-05-14 03:13:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:10.560864 | orchestrator | 2025-05-14 03:13:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:13.607734 | orchestrator | 2025-05-14 03:13:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:13.607831 | orchestrator | 2025-05-14 03:13:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:16.645232 | orchestrator | 2025-05-14 03:13:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:16.645340 | orchestrator | 2025-05-14 03:13:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:19.696807 | orchestrator | 2025-05-14 03:13:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:19.696946 | orchestrator | 2025-05-14 03:13:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:22.748011 | orchestrator | 2025-05-14 03:13:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:22.748130 | orchestrator | 2025-05-14 03:13:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:25.805726 | orchestrator | 2025-05-14 03:13:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:25.806352 | orchestrator | 2025-05-14 03:13:25 | INFO  | Task 4e38ebb1-876d-496f-8783-4405e1bcb9f2 is in state STARTED 2025-05-14 03:13:25.806802 | orchestrator | 2025-05-14 03:13:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:28.871034 | orchestrator | 2025-05-14 03:13:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:28.872824 | orchestrator | 2025-05-14 03:13:28 | INFO  | Task 4e38ebb1-876d-496f-8783-4405e1bcb9f2 is in state STARTED 2025-05-14 03:13:28.872917 | orchestrator | 2025-05-14 03:13:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:31.928684 | orchestrator | 2025-05-14 03:13:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:31.930090 | orchestrator | 2025-05-14 03:13:31 | INFO  | Task 4e38ebb1-876d-496f-8783-4405e1bcb9f2 is in state STARTED 2025-05-14 03:13:31.930117 | orchestrator | 2025-05-14 03:13:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:34.980614 | orchestrator | 2025-05-14 03:13:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 
2025-05-14 03:13:34.980752 | orchestrator | 2025-05-14 03:13:34 | INFO  | Task 4e38ebb1-876d-496f-8783-4405e1bcb9f2 is in state SUCCESS 2025-05-14 03:13:34.980781 | orchestrator | 2025-05-14 03:13:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:38.035064 | orchestrator | 2025-05-14 03:13:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:38.035175 | orchestrator | 2025-05-14 03:13:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:41.085962 | orchestrator | 2025-05-14 03:13:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:41.086129 | orchestrator | 2025-05-14 03:13:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:44.122267 | orchestrator | 2025-05-14 03:13:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:44.122402 | orchestrator | 2025-05-14 03:13:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:47.165744 | orchestrator | 2025-05-14 03:13:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:47.165939 | orchestrator | 2025-05-14 03:13:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:50.208507 | orchestrator | 2025-05-14 03:13:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:50.208603 | orchestrator | 2025-05-14 03:13:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:53.246685 | orchestrator | 2025-05-14 03:13:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:53.246791 | orchestrator | 2025-05-14 03:13:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:56.286134 | orchestrator | 2025-05-14 03:13:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:56.286235 | orchestrator | 2025-05-14 03:13:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:13:59.334287 | orchestrator | 2025-05-14 03:13:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:13:59.334392 | orchestrator | 2025-05-14 03:13:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:02.379764 | orchestrator | 2025-05-14 03:14:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:02.379944 | orchestrator | 2025-05-14 03:14:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:05.426929 | orchestrator | 2025-05-14 03:14:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:05.427004 | orchestrator | 2025-05-14 03:14:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:08.493144 | orchestrator | 2025-05-14 03:14:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:08.493245 | orchestrator | 2025-05-14 03:14:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:11.539141 | orchestrator | 2025-05-14 03:14:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:11.539218 | orchestrator | 2025-05-14 03:14:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:14.584581 | orchestrator | 2025-05-14 03:14:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:14.584677 | orchestrator | 2025-05-14 03:14:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:17.623431 | orchestrator | 2025-05-14 03:14:17 | INFO  | Task 
d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:17.623551 | orchestrator | 2025-05-14 03:14:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:20.673064 | orchestrator | 2025-05-14 03:14:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:20.673174 | orchestrator | 2025-05-14 03:14:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:23.722670 | orchestrator | 2025-05-14 03:14:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:23.722797 | orchestrator | 2025-05-14 03:14:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:26.775314 | orchestrator | 2025-05-14 03:14:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:26.775417 | orchestrator | 2025-05-14 03:14:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:29.827081 | orchestrator | 2025-05-14 03:14:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:30.011564 | orchestrator | 2025-05-14 03:14:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:32.875778 | orchestrator | 2025-05-14 03:14:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:32.875949 | orchestrator | 2025-05-14 03:14:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:35.918243 | orchestrator | 2025-05-14 03:14:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:35.918382 | orchestrator | 2025-05-14 03:14:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:38.967665 | orchestrator | 2025-05-14 03:14:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:38.967738 | orchestrator | 2025-05-14 03:14:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:42.023769 | orchestrator | 2025-05-14 03:14:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:42.023940 | orchestrator | 2025-05-14 03:14:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:45.067403 | orchestrator | 2025-05-14 03:14:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:45.067509 | orchestrator | 2025-05-14 03:14:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:48.114351 | orchestrator | 2025-05-14 03:14:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:48.114459 | orchestrator | 2025-05-14 03:14:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:51.148145 | orchestrator | 2025-05-14 03:14:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:51.148231 | orchestrator | 2025-05-14 03:14:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:54.186965 | orchestrator | 2025-05-14 03:14:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:54.187072 | orchestrator | 2025-05-14 03:14:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:14:57.230950 | orchestrator | 2025-05-14 03:14:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:14:57.231055 | orchestrator | 2025-05-14 03:14:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:00.282636 | orchestrator | 2025-05-14 03:15:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 
03:15:00.282907 | orchestrator | 2025-05-14 03:15:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:03.323917 | orchestrator | 2025-05-14 03:15:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:03.324020 | orchestrator | 2025-05-14 03:15:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:06.374670 | orchestrator | 2025-05-14 03:15:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:06.374779 | orchestrator | 2025-05-14 03:15:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:09.417910 | orchestrator | 2025-05-14 03:15:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:09.418110 | orchestrator | 2025-05-14 03:15:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:12.471930 | orchestrator | 2025-05-14 03:15:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:12.472055 | orchestrator | 2025-05-14 03:15:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:15.530184 | orchestrator | 2025-05-14 03:15:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:15.530267 | orchestrator | 2025-05-14 03:15:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:18.580282 | orchestrator | 2025-05-14 03:15:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:18.580390 | orchestrator | 2025-05-14 03:15:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:21.625893 | orchestrator | 2025-05-14 03:15:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:21.626012 | orchestrator | 2025-05-14 03:15:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:24.676913 | orchestrator | 2025-05-14 03:15:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:24.677024 | orchestrator | 2025-05-14 03:15:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:27.726210 | orchestrator | 2025-05-14 03:15:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:27.726315 | orchestrator | 2025-05-14 03:15:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:30.777895 | orchestrator | 2025-05-14 03:15:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:30.778004 | orchestrator | 2025-05-14 03:15:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:33.830308 | orchestrator | 2025-05-14 03:15:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:33.830411 | orchestrator | 2025-05-14 03:15:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:36.876092 | orchestrator | 2025-05-14 03:15:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:36.876199 | orchestrator | 2025-05-14 03:15:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:39.922252 | orchestrator | 2025-05-14 03:15:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:39.922329 | orchestrator | 2025-05-14 03:15:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:42.976495 | orchestrator | 2025-05-14 03:15:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:42.976604 | orchestrator | 2025-05-14 03:15:42 | INFO  | Wait 1 second(s) 
until the next check 2025-05-14 03:15:46.033501 | orchestrator | 2025-05-14 03:15:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:46.033587 | orchestrator | 2025-05-14 03:15:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:49.082502 | orchestrator | 2025-05-14 03:15:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:49.082599 | orchestrator | 2025-05-14 03:15:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:52.135235 | orchestrator | 2025-05-14 03:15:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:52.135340 | orchestrator | 2025-05-14 03:15:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:55.182984 | orchestrator | 2025-05-14 03:15:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:55.183149 | orchestrator | 2025-05-14 03:15:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:15:58.224689 | orchestrator | 2025-05-14 03:15:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:15:58.224821 | orchestrator | 2025-05-14 03:15:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:01.279322 | orchestrator | 2025-05-14 03:16:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:01.279440 | orchestrator | 2025-05-14 03:16:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:04.329977 | orchestrator | 2025-05-14 03:16:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:04.330131 | orchestrator | 2025-05-14 03:16:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:07.374917 | orchestrator | 2025-05-14 03:16:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:07.375025 | orchestrator | 2025-05-14 03:16:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:10.427334 | orchestrator | 2025-05-14 03:16:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:10.427469 | orchestrator | 2025-05-14 03:16:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:13.485273 | orchestrator | 2025-05-14 03:16:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:13.485394 | orchestrator | 2025-05-14 03:16:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:16.522129 | orchestrator | 2025-05-14 03:16:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:16.522263 | orchestrator | 2025-05-14 03:16:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:19.564552 | orchestrator | 2025-05-14 03:16:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:19.564759 | orchestrator | 2025-05-14 03:16:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:22.613993 | orchestrator | 2025-05-14 03:16:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:22.614282 | orchestrator | 2025-05-14 03:16:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:25.666894 | orchestrator | 2025-05-14 03:16:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:25.667016 | orchestrator | 2025-05-14 03:16:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:28.719664 | orchestrator | 2025-05-14 
03:16:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:28.719806 | orchestrator | 2025-05-14 03:16:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:31.767787 | orchestrator | 2025-05-14 03:16:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:31.767957 | orchestrator | 2025-05-14 03:16:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:34.812279 | orchestrator | 2025-05-14 03:16:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:34.812385 | orchestrator | 2025-05-14 03:16:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:37.855019 | orchestrator | 2025-05-14 03:16:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:37.855110 | orchestrator | 2025-05-14 03:16:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:40.908147 | orchestrator | 2025-05-14 03:16:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:40.908247 | orchestrator | 2025-05-14 03:16:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:43.958345 | orchestrator | 2025-05-14 03:16:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:43.958486 | orchestrator | 2025-05-14 03:16:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:47.007739 | orchestrator | 2025-05-14 03:16:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:47.007905 | orchestrator | 2025-05-14 03:16:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:50.069279 | orchestrator | 2025-05-14 03:16:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:50.069420 | orchestrator | 2025-05-14 03:16:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:53.115816 | orchestrator | 2025-05-14 03:16:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:53.115913 | orchestrator | 2025-05-14 03:16:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:56.153440 | orchestrator | 2025-05-14 03:16:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:56.153516 | orchestrator | 2025-05-14 03:16:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:16:59.198588 | orchestrator | 2025-05-14 03:16:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:16:59.198786 | orchestrator | 2025-05-14 03:16:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:02.237879 | orchestrator | 2025-05-14 03:17:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:02.237991 | orchestrator | 2025-05-14 03:17:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:05.281573 | orchestrator | 2025-05-14 03:17:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:05.281756 | orchestrator | 2025-05-14 03:17:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:08.330111 | orchestrator | 2025-05-14 03:17:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:08.330221 | orchestrator | 2025-05-14 03:17:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:11.384502 | orchestrator | 2025-05-14 03:17:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 
2025-05-14 03:17:11.384599 | orchestrator | 2025-05-14 03:17:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:14.426211 | orchestrator | 2025-05-14 03:17:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:14.426286 | orchestrator | 2025-05-14 03:17:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:17.468161 | orchestrator | 2025-05-14 03:17:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:17.468258 | orchestrator | 2025-05-14 03:17:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:20.516981 | orchestrator | 2025-05-14 03:17:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:20.517084 | orchestrator | 2025-05-14 03:17:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:23.561181 | orchestrator | 2025-05-14 03:17:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:23.561306 | orchestrator | 2025-05-14 03:17:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:26.610733 | orchestrator | 2025-05-14 03:17:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:26.610859 | orchestrator | 2025-05-14 03:17:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:29.661900 | orchestrator | 2025-05-14 03:17:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:29.662110 | orchestrator | 2025-05-14 03:17:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:32.709461 | orchestrator | 2025-05-14 03:17:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:32.709571 | orchestrator | 2025-05-14 03:17:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:35.760917 | orchestrator | 2025-05-14 03:17:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:35.761018 | orchestrator | 2025-05-14 03:17:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:38.808758 | orchestrator | 2025-05-14 03:17:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:38.808887 | orchestrator | 2025-05-14 03:17:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:41.853435 | orchestrator | 2025-05-14 03:17:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:41.853556 | orchestrator | 2025-05-14 03:17:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:44.898831 | orchestrator | 2025-05-14 03:17:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:44.898937 | orchestrator | 2025-05-14 03:17:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:47.947862 | orchestrator | 2025-05-14 03:17:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:47.947972 | orchestrator | 2025-05-14 03:17:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:50.998007 | orchestrator | 2025-05-14 03:17:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:50.998223 | orchestrator | 2025-05-14 03:17:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:17:54.044459 | orchestrator | 2025-05-14 03:17:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:54.044536 | orchestrator | 2025-05-14 03:17:54 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 03:17:57.084781 | orchestrator | 2025-05-14 03:17:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:17:57.084893 | orchestrator | 2025-05-14 03:17:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:00.131344 | orchestrator | 2025-05-14 03:18:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:00.131442 | orchestrator | 2025-05-14 03:18:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:03.181111 | orchestrator | 2025-05-14 03:18:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:03.181242 | orchestrator | 2025-05-14 03:18:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:06.234293 | orchestrator | 2025-05-14 03:18:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:06.234403 | orchestrator | 2025-05-14 03:18:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:09.281095 | orchestrator | 2025-05-14 03:18:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:09.281181 | orchestrator | 2025-05-14 03:18:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:12.340988 | orchestrator | 2025-05-14 03:18:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:12.341094 | orchestrator | 2025-05-14 03:18:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:15.389465 | orchestrator | 2025-05-14 03:18:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:15.389565 | orchestrator | 2025-05-14 03:18:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:18.441309 | orchestrator | 2025-05-14 03:18:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:18.441381 | orchestrator | 2025-05-14 03:18:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:21.493310 | orchestrator | 2025-05-14 03:18:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:21.493415 | orchestrator | 2025-05-14 03:18:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:24.540014 | orchestrator | 2025-05-14 03:18:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:24.540173 | orchestrator | 2025-05-14 03:18:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:27.584055 | orchestrator | 2025-05-14 03:18:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:27.584151 | orchestrator | 2025-05-14 03:18:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:30.623133 | orchestrator | 2025-05-14 03:18:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:30.623217 | orchestrator | 2025-05-14 03:18:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:33.674444 | orchestrator | 2025-05-14 03:18:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:33.674565 | orchestrator | 2025-05-14 03:18:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:36.729769 | orchestrator | 2025-05-14 03:18:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:36.729869 | orchestrator | 2025-05-14 03:18:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:39.783167 | orchestrator | 
2025-05-14 03:18:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:39.783284 | orchestrator | 2025-05-14 03:18:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:42.839091 | orchestrator | 2025-05-14 03:18:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:42.839222 | orchestrator | 2025-05-14 03:18:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:45.892975 | orchestrator | 2025-05-14 03:18:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:45.893086 | orchestrator | 2025-05-14 03:18:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:48.940411 | orchestrator | 2025-05-14 03:18:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:48.940507 | orchestrator | 2025-05-14 03:18:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:51.993351 | orchestrator | 2025-05-14 03:18:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:51.993433 | orchestrator | 2025-05-14 03:18:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:55.049878 | orchestrator | 2025-05-14 03:18:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:55.049988 | orchestrator | 2025-05-14 03:18:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:18:58.093197 | orchestrator | 2025-05-14 03:18:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:18:58.093327 | orchestrator | 2025-05-14 03:18:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:01.140803 | orchestrator | 2025-05-14 03:19:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:01.140932 | orchestrator | 2025-05-14 03:19:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:04.181754 | orchestrator | 2025-05-14 03:19:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:04.181921 | orchestrator | 2025-05-14 03:19:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:07.227085 | orchestrator | 2025-05-14 03:19:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:07.227221 | orchestrator | 2025-05-14 03:19:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:10.276219 | orchestrator | 2025-05-14 03:19:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:10.276381 | orchestrator | 2025-05-14 03:19:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:13.330657 | orchestrator | 2025-05-14 03:19:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:13.330766 | orchestrator | 2025-05-14 03:19:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:16.375924 | orchestrator | 2025-05-14 03:19:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:16.376034 | orchestrator | 2025-05-14 03:19:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:19.422328 | orchestrator | 2025-05-14 03:19:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:19.422432 | orchestrator | 2025-05-14 03:19:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:22.464815 | orchestrator | 2025-05-14 03:19:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in 
state STARTED 2025-05-14 03:19:22.464920 | orchestrator | 2025-05-14 03:19:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:25.513494 | orchestrator | 2025-05-14 03:19:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:25.513660 | orchestrator | 2025-05-14 03:19:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:28.559935 | orchestrator | 2025-05-14 03:19:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:28.560065 | orchestrator | 2025-05-14 03:19:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:31.607044 | orchestrator | 2025-05-14 03:19:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:31.607131 | orchestrator | 2025-05-14 03:19:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:34.664070 | orchestrator | 2025-05-14 03:19:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:34.664202 | orchestrator | 2025-05-14 03:19:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:37.714701 | orchestrator | 2025-05-14 03:19:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:37.714836 | orchestrator | 2025-05-14 03:19:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:40.766467 | orchestrator | 2025-05-14 03:19:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:40.766638 | orchestrator | 2025-05-14 03:19:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:43.815831 | orchestrator | 2025-05-14 03:19:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:43.815928 | orchestrator | 2025-05-14 03:19:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:46.866818 | orchestrator | 2025-05-14 03:19:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:46.866931 | orchestrator | 2025-05-14 03:19:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:49.921070 | orchestrator | 2025-05-14 03:19:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:49.921169 | orchestrator | 2025-05-14 03:19:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:52.972180 | orchestrator | 2025-05-14 03:19:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:52.972286 | orchestrator | 2025-05-14 03:19:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:56.024074 | orchestrator | 2025-05-14 03:19:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:56.024158 | orchestrator | 2025-05-14 03:19:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:19:59.070397 | orchestrator | 2025-05-14 03:19:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:19:59.070538 | orchestrator | 2025-05-14 03:19:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:02.112755 | orchestrator | 2025-05-14 03:20:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:02.112875 | orchestrator | 2025-05-14 03:20:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:05.150131 | orchestrator | 2025-05-14 03:20:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:05.150268 | orchestrator | 2025-05-14 03:20:05 | 
INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:08.198269 | orchestrator | 2025-05-14 03:20:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:08.198381 | orchestrator | 2025-05-14 03:20:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:11.234671 | orchestrator | 2025-05-14 03:20:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:11.234781 | orchestrator | 2025-05-14 03:20:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:14.280278 | orchestrator | 2025-05-14 03:20:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:14.280404 | orchestrator | 2025-05-14 03:20:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:17.330985 | orchestrator | 2025-05-14 03:20:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:17.331089 | orchestrator | 2025-05-14 03:20:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:20.385487 | orchestrator | 2025-05-14 03:20:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:20.385603 | orchestrator | 2025-05-14 03:20:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:23.436299 | orchestrator | 2025-05-14 03:20:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:23.436394 | orchestrator | 2025-05-14 03:20:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:26.485971 | orchestrator | 2025-05-14 03:20:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:26.486190 | orchestrator | 2025-05-14 03:20:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:29.535001 | orchestrator | 2025-05-14 03:20:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:29.535135 | orchestrator | 2025-05-14 03:20:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:32.580793 | orchestrator | 2025-05-14 03:20:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:32.580884 | orchestrator | 2025-05-14 03:20:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:35.627994 | orchestrator | 2025-05-14 03:20:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:35.628092 | orchestrator | 2025-05-14 03:20:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:38.679965 | orchestrator | 2025-05-14 03:20:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:38.680061 | orchestrator | 2025-05-14 03:20:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:41.730492 | orchestrator | 2025-05-14 03:20:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:41.730582 | orchestrator | 2025-05-14 03:20:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:44.779837 | orchestrator | 2025-05-14 03:20:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:44.779939 | orchestrator | 2025-05-14 03:20:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:47.829648 | orchestrator | 2025-05-14 03:20:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:47.829780 | orchestrator | 2025-05-14 03:20:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:50.880114 | 
orchestrator | 2025-05-14 03:20:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:50.880238 | orchestrator | 2025-05-14 03:20:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:53.931931 | orchestrator | 2025-05-14 03:20:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:53.932070 | orchestrator | 2025-05-14 03:20:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:20:56.978269 | orchestrator | 2025-05-14 03:20:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:20:56.978388 | orchestrator | 2025-05-14 03:20:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:00.020105 | orchestrator | 2025-05-14 03:21:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:00.020213 | orchestrator | 2025-05-14 03:21:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:03.065013 | orchestrator | 2025-05-14 03:21:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:03.065122 | orchestrator | 2025-05-14 03:21:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:06.104544 | orchestrator | 2025-05-14 03:21:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:06.104705 | orchestrator | 2025-05-14 03:21:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:09.147720 | orchestrator | 2025-05-14 03:21:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:09.147842 | orchestrator | 2025-05-14 03:21:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:12.196654 | orchestrator | 2025-05-14 03:21:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:12.196788 | orchestrator | 2025-05-14 03:21:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:15.244322 | orchestrator | 2025-05-14 03:21:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:15.244547 | orchestrator | 2025-05-14 03:21:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:18.288589 | orchestrator | 2025-05-14 03:21:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:18.288694 | orchestrator | 2025-05-14 03:21:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:21.332740 | orchestrator | 2025-05-14 03:21:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:21.332875 | orchestrator | 2025-05-14 03:21:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:24.378737 | orchestrator | 2025-05-14 03:21:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:24.378873 | orchestrator | 2025-05-14 03:21:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:27.418799 | orchestrator | 2025-05-14 03:21:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:27.418922 | orchestrator | 2025-05-14 03:21:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:30.466694 | orchestrator | 2025-05-14 03:21:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:30.466798 | orchestrator | 2025-05-14 03:21:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:33.511552 | orchestrator | 2025-05-14 03:21:33 | INFO  | Task 
d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:33.511656 | orchestrator | 2025-05-14 03:21:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:36.559474 | orchestrator | 2025-05-14 03:21:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:36.559607 | orchestrator | 2025-05-14 03:21:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:39.609498 | orchestrator | 2025-05-14 03:21:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:39.609703 | orchestrator | 2025-05-14 03:21:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:42.655682 | orchestrator | 2025-05-14 03:21:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:42.655776 | orchestrator | 2025-05-14 03:21:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:45.705441 | orchestrator | 2025-05-14 03:21:45 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:45.705598 | orchestrator | 2025-05-14 03:21:45 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:48.759441 | orchestrator | 2025-05-14 03:21:48 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:48.759596 | orchestrator | 2025-05-14 03:21:48 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:51.810816 | orchestrator | 2025-05-14 03:21:51 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:51.810922 | orchestrator | 2025-05-14 03:21:51 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:54.860740 | orchestrator | 2025-05-14 03:21:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:54.860830 | orchestrator | 2025-05-14 03:21:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:21:57.904566 | orchestrator | 2025-05-14 03:21:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:21:57.904671 | orchestrator | 2025-05-14 03:21:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:00.959965 | orchestrator | 2025-05-14 03:22:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:00.960072 | orchestrator | 2025-05-14 03:22:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:04.013822 | orchestrator | 2025-05-14 03:22:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:04.013934 | orchestrator | 2025-05-14 03:22:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:07.063618 | orchestrator | 2025-05-14 03:22:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:07.063727 | orchestrator | 2025-05-14 03:22:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:10.105600 | orchestrator | 2025-05-14 03:22:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:10.105692 | orchestrator | 2025-05-14 03:22:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:13.152111 | orchestrator | 2025-05-14 03:22:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:13.152222 | orchestrator | 2025-05-14 03:22:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:16.192046 | orchestrator | 2025-05-14 03:22:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 
03:22:16.192150 | orchestrator | 2025-05-14 03:22:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:19.233111 | orchestrator | 2025-05-14 03:22:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:19.233291 | orchestrator | 2025-05-14 03:22:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:22.280283 | orchestrator | 2025-05-14 03:22:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:22.280458 | orchestrator | 2025-05-14 03:22:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:25.323815 | orchestrator | 2025-05-14 03:22:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:25.323952 | orchestrator | 2025-05-14 03:22:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:28.374923 | orchestrator | 2025-05-14 03:22:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:28.375019 | orchestrator | 2025-05-14 03:22:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:31.425390 | orchestrator | 2025-05-14 03:22:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:31.425487 | orchestrator | 2025-05-14 03:22:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:34.478758 | orchestrator | 2025-05-14 03:22:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:34.478860 | orchestrator | 2025-05-14 03:22:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:37.513808 | orchestrator | 2025-05-14 03:22:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:37.513940 | orchestrator | 2025-05-14 03:22:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:40.564327 | orchestrator | 2025-05-14 03:22:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:40.564446 | orchestrator | 2025-05-14 03:22:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:43.616341 | orchestrator | 2025-05-14 03:22:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:43.616446 | orchestrator | 2025-05-14 03:22:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:46.664908 | orchestrator | 2025-05-14 03:22:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:46.665078 | orchestrator | 2025-05-14 03:22:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:49.711064 | orchestrator | 2025-05-14 03:22:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:49.711208 | orchestrator | 2025-05-14 03:22:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:52.748466 | orchestrator | 2025-05-14 03:22:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:52.748561 | orchestrator | 2025-05-14 03:22:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:55.791946 | orchestrator | 2025-05-14 03:22:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:55.792061 | orchestrator | 2025-05-14 03:22:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:22:58.838812 | orchestrator | 2025-05-14 03:22:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:22:58.839026 | orchestrator | 2025-05-14 03:22:58 | INFO  | Wait 1 second(s) 
until the next check 2025-05-14 03:23:01.891652 | orchestrator | 2025-05-14 03:23:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:23:01.891811 | orchestrator | 2025-05-14 03:23:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:04.942010 | orchestrator | 2025-05-14 03:23:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:23:04.942236 | orchestrator | 2025-05-14 03:23:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:07.996429 | orchestrator | 2025-05-14 03:23:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:23:07.996576 | orchestrator | 2025-05-14 03:23:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:11.048531 | orchestrator | 2025-05-14 03:23:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:23:11.048691 | orchestrator | 2025-05-14 03:23:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:14.088826 | orchestrator | 2025-05-14 03:23:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:23:14.088904 | orchestrator | 2025-05-14 03:23:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:17.140639 | orchestrator | 2025-05-14 03:23:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:23:17.140714 | orchestrator | 2025-05-14 03:23:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:20.180369 | orchestrator | 2025-05-14 03:23:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:23:20.180500 | orchestrator | 2025-05-14 03:23:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:23.224363 | orchestrator | 2025-05-14 03:23:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:23:23.224474 | orchestrator | 2025-05-14 03:23:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:26.281377 | orchestrator | 2025-05-14 03:23:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:23:26.281628 | orchestrator | 2025-05-14 03:23:26 | INFO  | Task d13ce13d-522b-40a4-ad8c-e1240691cec4 is in state STARTED 2025-05-14 03:23:26.281652 | orchestrator | 2025-05-14 03:23:26 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:29.339283 | orchestrator | 2025-05-14 03:23:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:23:29.339467 | orchestrator | 2025-05-14 03:23:29 | INFO  | Task d13ce13d-522b-40a4-ad8c-e1240691cec4 is in state STARTED 2025-05-14 03:23:29.339486 | orchestrator | 2025-05-14 03:23:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:32.391772 | orchestrator | 2025-05-14 03:23:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:23:32.394126 | orchestrator | 2025-05-14 03:23:32 | INFO  | Task d13ce13d-522b-40a4-ad8c-e1240691cec4 is in state STARTED 2025-05-14 03:23:32.394181 | orchestrator | 2025-05-14 03:23:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:23:35.439894 | orchestrator | 2025-05-14 03:23:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:23:35.440080 | orchestrator | 2025-05-14 03:23:35 | INFO  | Task d13ce13d-522b-40a4-ad8c-e1240691cec4 is in state SUCCESS 2025-05-14 03:23:35.440101 | orchestrator | 2025-05-14 03:23:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 
[polling repeats every ~3 seconds: task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 remains in state STARTED from 03:23:38 through 03:33:20]
2025-05-14 03:33:23.816812 | orchestrator | 2025-05-14 03:33:23 | INFO  | Task edcf9ec6-402e-4f37-93bd-46a4ef16bfd4 is in state STARTED
2025-05-14 03:33:23.816892 | orchestrator | 2025-05-14 03:33:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 03:33:23.816901 | orchestrator | 2025-05-14 03:33:23 | INFO  | Wait 1 second(s) until the next check
[both tasks remain in state STARTED at the 03:33:26, 03:33:29, and 03:33:32 checks]
2025-05-14 03:33:36.053967 | orchestrator | 2025-05-14 03:33:36 | INFO  | Task edcf9ec6-402e-4f37-93bd-46a4ef16bfd4 is in state SUCCESS
2025-05-14 03:33:36.055890 | orchestrator | 2025-05-14 03:33:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 03:33:36.055913 | orchestrator | 2025-05-14 03:33:36 | INFO  | Wait 1 second(s) until the next check
[polling repeats every ~3 seconds: task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 remains in state STARTED from 03:33:39 through 03:37:12]
2025-05-14 03:37:15.503052 | orchestrator | 2025-05-14 03:37:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED
2025-05-14 03:37:15.503158 | orchestrator | 2025-05-14 03:37:15 | INFO  | Wait 1 second(s)
until the next check 2025-05-14 03:37:18.551605 | orchestrator | 2025-05-14 03:37:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:18.551707 | orchestrator | 2025-05-14 03:37:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:21.603870 | orchestrator | 2025-05-14 03:37:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:21.604051 | orchestrator | 2025-05-14 03:37:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:24.647058 | orchestrator | 2025-05-14 03:37:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:24.647197 | orchestrator | 2025-05-14 03:37:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:27.694476 | orchestrator | 2025-05-14 03:37:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:27.694581 | orchestrator | 2025-05-14 03:37:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:30.749175 | orchestrator | 2025-05-14 03:37:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:30.749311 | orchestrator | 2025-05-14 03:37:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:33.802239 | orchestrator | 2025-05-14 03:37:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:33.802329 | orchestrator | 2025-05-14 03:37:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:36.847508 | orchestrator | 2025-05-14 03:37:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:37.014072 | orchestrator | 2025-05-14 03:37:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:39.898426 | orchestrator | 2025-05-14 03:37:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:39.898530 | orchestrator | 2025-05-14 03:37:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:42.948589 | orchestrator | 2025-05-14 03:37:42 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:42.948728 | orchestrator | 2025-05-14 03:37:42 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:46.000478 | orchestrator | 2025-05-14 03:37:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:46.000611 | orchestrator | 2025-05-14 03:37:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:49.050257 | orchestrator | 2025-05-14 03:37:49 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:49.050395 | orchestrator | 2025-05-14 03:37:49 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:52.086255 | orchestrator | 2025-05-14 03:37:52 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:52.086359 | orchestrator | 2025-05-14 03:37:52 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:55.135377 | orchestrator | 2025-05-14 03:37:55 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:55.135508 | orchestrator | 2025-05-14 03:37:55 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:37:58.184864 | orchestrator | 2025-05-14 03:37:58 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:37:58.185020 | orchestrator | 2025-05-14 03:37:58 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:01.233328 | orchestrator | 2025-05-14 
03:38:01 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:01.233435 | orchestrator | 2025-05-14 03:38:01 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:04.290271 | orchestrator | 2025-05-14 03:38:04 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:04.290346 | orchestrator | 2025-05-14 03:38:04 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:07.336019 | orchestrator | 2025-05-14 03:38:07 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:07.336162 | orchestrator | 2025-05-14 03:38:07 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:10.376193 | orchestrator | 2025-05-14 03:38:10 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:10.376272 | orchestrator | 2025-05-14 03:38:10 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:13.431909 | orchestrator | 2025-05-14 03:38:13 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:13.432084 | orchestrator | 2025-05-14 03:38:13 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:16.491001 | orchestrator | 2025-05-14 03:38:16 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:16.491107 | orchestrator | 2025-05-14 03:38:16 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:19.542400 | orchestrator | 2025-05-14 03:38:19 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:19.542511 | orchestrator | 2025-05-14 03:38:19 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:22.589407 | orchestrator | 2025-05-14 03:38:22 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:22.589509 | orchestrator | 2025-05-14 03:38:22 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:25.630715 | orchestrator | 2025-05-14 03:38:25 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:25.630872 | orchestrator | 2025-05-14 03:38:25 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:28.679078 | orchestrator | 2025-05-14 03:38:28 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:28.679199 | orchestrator | 2025-05-14 03:38:28 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:31.730910 | orchestrator | 2025-05-14 03:38:31 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:31.731130 | orchestrator | 2025-05-14 03:38:31 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:34.786093 | orchestrator | 2025-05-14 03:38:34 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:34.786195 | orchestrator | 2025-05-14 03:38:34 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:37.834190 | orchestrator | 2025-05-14 03:38:37 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:37.834294 | orchestrator | 2025-05-14 03:38:37 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:40.878740 | orchestrator | 2025-05-14 03:38:40 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:40.878856 | orchestrator | 2025-05-14 03:38:40 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:43.926580 | orchestrator | 2025-05-14 03:38:43 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 
2025-05-14 03:38:43.926677 | orchestrator | 2025-05-14 03:38:43 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:46.971423 | orchestrator | 2025-05-14 03:38:46 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:46.971523 | orchestrator | 2025-05-14 03:38:46 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:50.026297 | orchestrator | 2025-05-14 03:38:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:50.026406 | orchestrator | 2025-05-14 03:38:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:53.066361 | orchestrator | 2025-05-14 03:38:53 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:53.066470 | orchestrator | 2025-05-14 03:38:53 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:56.101378 | orchestrator | 2025-05-14 03:38:56 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:56.101449 | orchestrator | 2025-05-14 03:38:56 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:38:59.147815 | orchestrator | 2025-05-14 03:38:59 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:38:59.147946 | orchestrator | 2025-05-14 03:38:59 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:02.193682 | orchestrator | 2025-05-14 03:39:02 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:02.193806 | orchestrator | 2025-05-14 03:39:02 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:05.243126 | orchestrator | 2025-05-14 03:39:05 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:05.243216 | orchestrator | 2025-05-14 03:39:05 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:08.291833 | orchestrator | 2025-05-14 03:39:08 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:08.291929 | orchestrator | 2025-05-14 03:39:08 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:11.334714 | orchestrator | 2025-05-14 03:39:11 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:11.334808 | orchestrator | 2025-05-14 03:39:11 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:14.387436 | orchestrator | 2025-05-14 03:39:14 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:14.387524 | orchestrator | 2025-05-14 03:39:14 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:17.428529 | orchestrator | 2025-05-14 03:39:17 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:17.428630 | orchestrator | 2025-05-14 03:39:17 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:20.471728 | orchestrator | 2025-05-14 03:39:20 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:20.471800 | orchestrator | 2025-05-14 03:39:20 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:23.518548 | orchestrator | 2025-05-14 03:39:23 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:23.518633 | orchestrator | 2025-05-14 03:39:23 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:26.572029 | orchestrator | 2025-05-14 03:39:26 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:26.572101 | orchestrator | 2025-05-14 03:39:26 | INFO  | Wait 1 
second(s) until the next check 2025-05-14 03:39:29.621845 | orchestrator | 2025-05-14 03:39:29 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:29.621933 | orchestrator | 2025-05-14 03:39:29 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:32.678951 | orchestrator | 2025-05-14 03:39:32 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:32.679054 | orchestrator | 2025-05-14 03:39:32 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:35.724372 | orchestrator | 2025-05-14 03:39:35 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:35.724484 | orchestrator | 2025-05-14 03:39:35 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:38.768936 | orchestrator | 2025-05-14 03:39:38 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:38.769081 | orchestrator | 2025-05-14 03:39:38 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:41.817361 | orchestrator | 2025-05-14 03:39:41 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:41.818242 | orchestrator | 2025-05-14 03:39:41 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:44.863461 | orchestrator | 2025-05-14 03:39:44 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:44.863576 | orchestrator | 2025-05-14 03:39:44 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:47.915138 | orchestrator | 2025-05-14 03:39:47 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:47.915242 | orchestrator | 2025-05-14 03:39:47 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:50.962316 | orchestrator | 2025-05-14 03:39:50 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:50.962450 | orchestrator | 2025-05-14 03:39:50 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:54.013284 | orchestrator | 2025-05-14 03:39:54 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:54.013393 | orchestrator | 2025-05-14 03:39:54 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:39:57.048008 | orchestrator | 2025-05-14 03:39:57 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:39:57.048351 | orchestrator | 2025-05-14 03:39:57 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:00.094408 | orchestrator | 2025-05-14 03:40:00 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:00.094507 | orchestrator | 2025-05-14 03:40:00 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:03.137161 | orchestrator | 2025-05-14 03:40:03 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:03.137271 | orchestrator | 2025-05-14 03:40:03 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:06.185561 | orchestrator | 2025-05-14 03:40:06 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:06.185697 | orchestrator | 2025-05-14 03:40:06 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:09.230189 | orchestrator | 2025-05-14 03:40:09 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:09.230312 | orchestrator | 2025-05-14 03:40:09 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:12.265714 | orchestrator | 
2025-05-14 03:40:12 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:12.265803 | orchestrator | 2025-05-14 03:40:12 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:15.319960 | orchestrator | 2025-05-14 03:40:15 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:15.320063 | orchestrator | 2025-05-14 03:40:15 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:18.371140 | orchestrator | 2025-05-14 03:40:18 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:18.371275 | orchestrator | 2025-05-14 03:40:18 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:21.421300 | orchestrator | 2025-05-14 03:40:21 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:21.421405 | orchestrator | 2025-05-14 03:40:21 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:24.472303 | orchestrator | 2025-05-14 03:40:24 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:24.472428 | orchestrator | 2025-05-14 03:40:24 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:27.510289 | orchestrator | 2025-05-14 03:40:27 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:27.510383 | orchestrator | 2025-05-14 03:40:27 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:30.562416 | orchestrator | 2025-05-14 03:40:30 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:30.562471 | orchestrator | 2025-05-14 03:40:30 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:33.601944 | orchestrator | 2025-05-14 03:40:33 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:33.602170 | orchestrator | 2025-05-14 03:40:33 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:36.654916 | orchestrator | 2025-05-14 03:40:36 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:36.779560 | orchestrator | 2025-05-14 03:40:36 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:39.705502 | orchestrator | 2025-05-14 03:40:39 | INFO  | Task d82f8ed9-5664-4bc4-a3e9-26e1a4e29521 is in state STARTED 2025-05-14 03:40:39.705615 | orchestrator | 2025-05-14 03:40:39 | INFO  | Wait 1 second(s) until the next check 2025-05-14 03:40:41.072399 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-05-14 03:40:41.074255 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-14 03:40:41.824811 | 2025-05-14 03:40:41.824988 | PLAY [Post output play] 2025-05-14 03:40:41.843262 | 2025-05-14 03:40:41.843448 | LOOP [stage-output : Register sources] 2025-05-14 03:40:41.904019 | 2025-05-14 03:40:41.904377 | TASK [stage-output : Check sudo] 2025-05-14 03:40:42.760822 | orchestrator | sudo: a password is required 2025-05-14 03:40:42.945232 | orchestrator | ok: Runtime: 0:00:00.013252 2025-05-14 03:40:42.962508 | 2025-05-14 03:40:42.962714 | LOOP [stage-output : Set source and destination for files and folders] 2025-05-14 03:40:42.998021 | 2025-05-14 03:40:42.998357 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-05-14 03:40:43.065830 | orchestrator | ok 2025-05-14 03:40:43.075839 | 2025-05-14 03:40:43.075994 | LOOP [stage-output : Ensure target folders exist] 2025-05-14 03:40:43.544734 | orchestrator | ok: "docs" 2025-05-14 
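The run above ends with the deploy playbook still polling a task that never leaves the STARTED state; Zuul's job timeout then aborts the playbook (RUN END RESULT_TIMED_OUT) and the post-run phase begins. The wait loop visible in the log is a plain poll-until-terminal-state pattern; a minimal sketch follows. The get_task_state callable and the terminal state names are assumptions for illustration, not the actual OSISM client API.

```python
# Sketch of the poll-until-terminal-state loop seen in the log above.
# get_task_state is a hypothetical callable returning a Celery-style state
# string for a task UUID; it stands in for whatever the real client uses.
import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                    level=logging.INFO)
LOG = logging.getLogger(__name__)

# Assumed terminal states; the job above only ever observes STARTED.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}


def wait_for_task(get_task_state, task_id, interval=1, timeout=None):
    """Poll a task until it reaches a terminal state or the timeout expires."""
    start = time.monotonic()
    while True:
        state = get_task_state(task_id)
        LOG.info("Task %s is in state %s", task_id, state)
        if state in TERMINAL_STATES:
            return state
        if timeout is not None and time.monotonic() - start >= timeout:
            raise TimeoutError(
                f"Task {task_id} still in state {state} after {timeout} seconds")
        LOG.info("Wait %s second(s) until the next check", interval)
        time.sleep(interval)
```

In the run above there is no client-side deadline, so the loop keeps logging the same pair of messages until the surrounding Zuul job timeout terminates the playbook.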
2025-05-14 03:40:43.545074 | 2025-05-14 03:40:43.805157 | orchestrator | ok: "artifacts" 2025-05-14 03:40:44.089247 | orchestrator | ok: "logs" 2025-05-14 03:40:44.110097 | 2025-05-14 03:40:44.110447 | LOOP [stage-output : Copy files and folders to staging folder] 2025-05-14 03:40:44.144064 | 2025-05-14 03:40:44.144406 | TASK [stage-output : Make all log files readable] 2025-05-14 03:40:44.425370 | orchestrator | ok 2025-05-14 03:40:44.434952 | 2025-05-14 03:40:44.435101 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-05-14 03:40:44.479848 | orchestrator | skipping: Conditional result was False 2025-05-14 03:40:44.490352 | 2025-05-14 03:40:44.490489 | TASK [stage-output : Discover log files for compression] 2025-05-14 03:40:44.514346 | orchestrator | skipping: Conditional result was False 2025-05-14 03:40:44.524123 | 2025-05-14 03:40:44.524251 | LOOP [stage-output : Archive everything from logs] 2025-05-14 03:40:44.570973 | 2025-05-14 03:40:44.571151 | PLAY [Post cleanup play] 2025-05-14 03:40:44.579470 | 2025-05-14 03:40:44.579581 | TASK [Set cloud fact (Zuul deployment)] 2025-05-14 03:40:44.637153 | orchestrator | ok 2025-05-14 03:40:44.650264 | 2025-05-14 03:40:44.650456 | TASK [Set cloud fact (local deployment)] 2025-05-14 03:40:44.685699 | orchestrator | skipping: Conditional result was False 2025-05-14 03:40:44.699208 | 2025-05-14 03:40:44.699408 | TASK [Clean the cloud environment] 2025-05-14 03:40:45.580937 | orchestrator | 2025-05-14 03:40:45 - clean up servers 2025-05-14 03:40:46.417065 | orchestrator | 2025-05-14 03:40:46 - testbed-manager 2025-05-14 03:40:46.512741 | orchestrator | 2025-05-14 03:40:46 - testbed-node-0 2025-05-14 03:40:46.602691 | orchestrator | 2025-05-14 03:40:46 - testbed-node-3 2025-05-14 03:40:46.715500 | orchestrator | 2025-05-14 03:40:46 - testbed-node-1 2025-05-14 03:40:46.812955 | orchestrator | 2025-05-14 03:40:46 - testbed-node-5 2025-05-14 03:40:46.908903 | orchestrator | 2025-05-14 03:40:46 - testbed-node-2 2025-05-14 03:40:47.014142 | orchestrator | 2025-05-14 03:40:47 - testbed-node-4 2025-05-14 03:40:47.121259 | orchestrator | 2025-05-14 03:40:47 - clean up keypairs 2025-05-14 03:40:47.142721 | orchestrator | 2025-05-14 03:40:47 - testbed 2025-05-14 03:40:47.173091 | orchestrator | 2025-05-14 03:40:47 - wait for servers to be gone 2025-05-14 03:40:58.529077 | orchestrator | 2025-05-14 03:40:58 - clean up ports 2025-05-14 03:40:58.744651 | orchestrator | 2025-05-14 03:40:58 - 1f344957-c427-4527-9efc-9e5ce672ef0f 2025-05-14 03:40:59.091403 | orchestrator | 2025-05-14 03:40:59 - 4408801d-4256-4117-a3b5-7a5adcffbffc 2025-05-14 03:40:59.274984 | orchestrator | 2025-05-14 03:40:59 - 60f3b54f-a32e-4492-a75a-1914962b8131 2025-05-14 03:40:59.499273 | orchestrator | 2025-05-14 03:40:59 - 611b4bcc-2637-4040-bbb8-f9c529fcf472 2025-05-14 03:40:59.703486 | orchestrator | 2025-05-14 03:40:59 - 77f43765-e02c-4de1-a62d-e4bfaaaa8a1f 2025-05-14 03:40:59.893174 | orchestrator | 2025-05-14 03:40:59 - 91e2f2b9-9625-4ca2-8e22-fd0d2d5fcc9c 2025-05-14 03:41:00.090551 | orchestrator | 2025-05-14 03:41:00 - f7983bd3-f70e-4641-aaab-9ff15f987030 2025-05-14 03:41:00.283075 | orchestrator | 2025-05-14 03:41:00 - clean up volumes 2025-05-14 03:41:00.424321 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-4-node-base 2025-05-14 03:41:00.458988 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-3-node-base 2025-05-14 03:41:00.499022 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-5-node-base 2025-05-14 03:41:00.541105 | orchestrator | 2025-05-14
03:41:00 - testbed-volume-0-node-base 2025-05-14 03:41:00.580215 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-2-node-base 2025-05-14 03:41:00.624160 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-1-node-base 2025-05-14 03:41:00.670333 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-manager-base 2025-05-14 03:41:00.712988 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-6-node-3 2025-05-14 03:41:00.755764 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-2-node-5 2025-05-14 03:41:00.796867 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-5-node-5 2025-05-14 03:41:00.838413 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-4-node-4 2025-05-14 03:41:00.878336 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-1-node-4 2025-05-14 03:41:00.919938 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-0-node-3 2025-05-14 03:41:00.962770 | orchestrator | 2025-05-14 03:41:00 - testbed-volume-7-node-4 2025-05-14 03:41:01.004708 | orchestrator | 2025-05-14 03:41:01 - testbed-volume-3-node-3 2025-05-14 03:41:01.042679 | orchestrator | 2025-05-14 03:41:01 - testbed-volume-8-node-5 2025-05-14 03:41:01.082151 | orchestrator | 2025-05-14 03:41:01 - disconnect routers 2025-05-14 03:41:01.181171 | orchestrator | 2025-05-14 03:41:01 - testbed 2025-05-14 03:41:01.798638 | orchestrator | 2025-05-14 03:41:01 - clean up subnets 2025-05-14 03:41:01.836341 | orchestrator | 2025-05-14 03:41:01 - subnet-testbed-management 2025-05-14 03:41:01.996071 | orchestrator | 2025-05-14 03:41:01 - clean up networks 2025-05-14 03:41:02.150871 | orchestrator | 2025-05-14 03:41:02 - net-testbed-management 2025-05-14 03:41:02.396398 | orchestrator | 2025-05-14 03:41:02 - clean up security groups 2025-05-14 03:41:02.428615 | orchestrator | 2025-05-14 03:41:02 - testbed-management 2025-05-14 03:41:02.512957 | orchestrator | 2025-05-14 03:41:02 - testbed-node 2025-05-14 03:41:02.594570 | orchestrator | 2025-05-14 03:41:02 - clean up floating ips 2025-05-14 03:41:02.624078 | orchestrator | 2025-05-14 03:41:02 - 81.163.192.80 2025-05-14 03:41:02.992990 | orchestrator | 2025-05-14 03:41:02 - clean up routers 2025-05-14 03:41:03.042457 | orchestrator | 2025-05-14 03:41:03 - testbed 2025-05-14 03:41:04.246272 | orchestrator | ok: Runtime: 0:00:18.749590 2025-05-14 03:41:04.250897 | 2025-05-14 03:41:04.251066 | PLAY RECAP 2025-05-14 03:41:04.251186 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-05-14 03:41:04.251248 | 2025-05-14 03:41:04.395417 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-14 03:41:04.397732 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-14 03:41:05.170516 | 2025-05-14 03:41:05.170689 | PLAY [Cleanup play] 2025-05-14 03:41:05.187558 | 2025-05-14 03:41:05.187699 | TASK [Set cloud fact (Zuul deployment)] 2025-05-14 03:41:05.244564 | orchestrator | ok 2025-05-14 03:41:05.254020 | 2025-05-14 03:41:05.254172 | TASK [Set cloud fact (local deployment)] 2025-05-14 03:41:05.288602 | orchestrator | skipping: Conditional result was False 2025-05-14 03:41:05.305594 | 2025-05-14 03:41:05.305742 | TASK [Clean the cloud environment] 2025-05-14 03:41:06.467896 | orchestrator | 2025-05-14 03:41:06 - clean up servers 2025-05-14 03:41:06.987112 | orchestrator | 2025-05-14 03:41:06 - clean up keypairs 2025-05-14 03:41:07.003407 | orchestrator | 2025-05-14 03:41:07 - wait for servers to be gone 2025-05-14 03:41:07.087876 | orchestrator | 2025-05-14 
03:41:07 - clean up ports 2025-05-14 03:41:07.160142 | orchestrator | 2025-05-14 03:41:07 - clean up volumes 2025-05-14 03:41:07.249678 | orchestrator | 2025-05-14 03:41:07 - disconnect routers 2025-05-14 03:41:07.271874 | orchestrator | 2025-05-14 03:41:07 - clean up subnets 2025-05-14 03:41:07.290089 | orchestrator | 2025-05-14 03:41:07 - clean up networks 2025-05-14 03:41:07.438786 | orchestrator | 2025-05-14 03:41:07 - clean up security groups 2025-05-14 03:41:07.460290 | orchestrator | 2025-05-14 03:41:07 - clean up floating ips 2025-05-14 03:41:07.486406 | orchestrator | 2025-05-14 03:41:07 - clean up routers 2025-05-14 03:41:07.850770 | orchestrator | ok: Runtime: 0:00:01.441689 2025-05-14 03:41:07.854693 | 2025-05-14 03:41:07.854904 | PLAY RECAP 2025-05-14 03:41:07.855042 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-05-14 03:41:07.855108 | 2025-05-14 03:41:07.981717 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-05-14 03:41:07.984166 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-05-14 03:41:08.727226 | 2025-05-14 03:41:08.727397 | PLAY [Base post-fetch] 2025-05-14 03:41:08.742877 | 2025-05-14 03:41:08.743016 | TASK [fetch-output : Set log path for multiple nodes] 2025-05-14 03:41:08.798793 | orchestrator | skipping: Conditional result was False 2025-05-14 03:41:08.809234 | 2025-05-14 03:41:08.809424 | TASK [fetch-output : Set log path for single node] 2025-05-14 03:41:08.860775 | orchestrator | ok 2025-05-14 03:41:08.868968 | 2025-05-14 03:41:08.869111 | LOOP [fetch-output : Ensure local output dirs] 2025-05-14 03:41:09.376170 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/7d8a8fd7f89a456f89cad5df4058c4c4/work/logs" 2025-05-14 03:41:09.649488 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/7d8a8fd7f89a456f89cad5df4058c4c4/work/artifacts" 2025-05-14 03:41:09.932242 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/7d8a8fd7f89a456f89cad5df4058c4c4/work/docs" 2025-05-14 03:41:09.960476 | 2025-05-14 03:41:09.960648 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-05-14 03:41:10.916199 | orchestrator | changed: .d..t...... ./ 2025-05-14 03:41:10.916593 | orchestrator | changed: All items complete 2025-05-14 03:41:10.916653 | 2025-05-14 03:41:11.614616 | orchestrator | changed: .d..t...... ./ 2025-05-14 03:41:12.350178 | orchestrator | changed: .d..t...... 
./ 2025-05-14 03:41:12.378051 | 2025-05-14 03:41:12.378193 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-05-14 03:41:12.418464 | orchestrator | skipping: Conditional result was False 2025-05-14 03:41:12.421780 | orchestrator | skipping: Conditional result was False 2025-05-14 03:41:12.441683 | 2025-05-14 03:41:12.441831 | PLAY RECAP 2025-05-14 03:41:12.441942 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-05-14 03:41:12.442003 | 2025-05-14 03:41:12.570044 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-05-14 03:41:12.572540 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-14 03:41:13.342135 | 2025-05-14 03:41:13.342288 | PLAY [Base post] 2025-05-14 03:41:13.356791 | 2025-05-14 03:41:13.356928 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-05-14 03:41:14.328822 | orchestrator | changed 2025-05-14 03:41:14.339678 | 2025-05-14 03:41:14.339804 | PLAY RECAP 2025-05-14 03:41:14.339882 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-05-14 03:41:14.339961 | 2025-05-14 03:41:14.456060 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-05-14 03:41:14.457066 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-05-14 03:41:15.270248 | 2025-05-14 03:41:15.270446 | PLAY [Base post-logs] 2025-05-14 03:41:15.281167 | 2025-05-14 03:41:15.281307 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-05-14 03:41:15.737294 | localhost | changed 2025-05-14 03:41:15.756122 | 2025-05-14 03:41:15.756388 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-05-14 03:41:15.805918 | localhost | ok 2025-05-14 03:41:15.812285 | 2025-05-14 03:41:15.812473 | TASK [Set zuul-log-path fact] 2025-05-14 03:41:15.831611 | localhost | ok 2025-05-14 03:41:15.845757 | 2025-05-14 03:41:15.845937 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-05-14 03:41:15.873880 | localhost | ok 2025-05-14 03:41:15.879454 | 2025-05-14 03:41:15.879606 | TASK [upload-logs : Create log directories] 2025-05-14 03:41:16.382273 | localhost | changed 2025-05-14 03:41:16.388109 | 2025-05-14 03:41:16.388297 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-05-14 03:41:16.912435 | localhost -> localhost | ok: Runtime: 0:00:00.007268 2025-05-14 03:41:16.917261 | 2025-05-14 03:41:16.917444 | TASK [upload-logs : Upload logs to log server] 2025-05-14 03:41:17.506475 | localhost | Output suppressed because no_log was given 2025-05-14 03:41:17.510467 | 2025-05-14 03:41:17.510658 | LOOP [upload-logs : Compress console log and json output] 2025-05-14 03:41:17.575447 | localhost | skipping: Conditional result was False 2025-05-14 03:41:17.581524 | localhost | skipping: Conditional result was False 2025-05-14 03:41:17.589625 | 2025-05-14 03:41:17.589737 | LOOP [upload-logs : Upload compressed console log and json output] 2025-05-14 03:41:17.633158 | localhost | skipping: Conditional result was False 2025-05-14 03:41:17.633455 | 2025-05-14 03:41:17.637907 | localhost | skipping: Conditional result was False 2025-05-14 03:41:17.646564 | 2025-05-14 03:41:17.646683 | LOOP [upload-logs : Upload console log and json output]
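The "Clean the cloud environment" task above tears the testbed down in a fixed order: servers and keypairs, wait for the servers to be gone, then ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the router itself; the subsequent cleanup.yml run repeats the same steps against an already empty project. A rough openstacksdk sketch of that ordering follows; the cloud name, the name filters and the overall structure are assumptions for illustration, not the actual cleanup script.

```python
# Rough sketch of the teardown order shown in the cleanup log above,
# using openstacksdk. The cloud name "testbed" and the name-prefix
# filters are assumptions; the real cleanup script may differ.
import openstack

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry
prefix = "testbed"

# servers and keypairs first, then wait until the servers are really gone
servers = [s for s in conn.compute.servers() if s.name.startswith(prefix)]
for server in servers:
    conn.compute.delete_server(server)
for keypair in conn.compute.keypairs():
    if keypair.name.startswith(prefix):
        conn.compute.delete_keypair(keypair)
for server in servers:
    conn.compute.wait_for_delete(server)

# ports on the management network (only the instance ports), then volumes
network = conn.network.find_network("net-testbed-management")
if network:
    for port in conn.network.ports(network_id=network.id):
        if port.device_owner.startswith("compute:"):
            conn.network.delete_port(port)
for volume in conn.block_storage.volumes():
    if volume.name.startswith(f"{prefix}-volume"):
        conn.block_storage.delete_volume(volume)

# disconnect the router, then remove subnet and network
router = conn.network.find_router(prefix)
subnet = conn.network.find_subnet("subnet-testbed-management")
if router and subnet:
    conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
if subnet:
    conn.network.delete_subnet(subnet)
if network:
    conn.network.delete_network(network)

# security groups, floating IPs, and finally the router itself
for group in conn.network.security_groups():
    if group.name.startswith(prefix):
        conn.network.delete_security_group(group)
for ip in conn.network.ips():  # the log above releases a single floating IP
    conn.network.delete_ip(ip)
if router:
    conn.network.delete_router(router)
```

Deleting in this order matters: ports and router interfaces must be gone before subnets and networks can be removed, and the router can only be deleted once it no longer has interfaces.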
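The post playbooks above follow a stage-and-fetch pattern: ensure logs/, artifacts/ and docs/ directories exist, make the staged files readable, pull them back to the executor's work directory with rsync (the ".d..t......" lines are rsync's itemized-changes output), and finally upload everything to the log server. A small sketch of the local half of that pattern follows; the work directory, the "orchestrator" SSH alias and the remote staging path are placeholders, not the values the real roles use.

```python
# Sketch of the stage-and-fetch pattern from the post playbooks above.
# Paths and the "orchestrator" SSH alias are placeholders.
import os
import stat
import subprocess
from pathlib import Path

WORK_DIR = Path("/tmp/zuul-work")   # placeholder for the build work dir
REMOTE = "orchestrator"             # SSH alias of the node, placeholder
REMOTE_STAGE = "zuul-output"        # staging dir on the node, placeholder


def ensure_local_output_dirs():
    """Counterpart of 'Ensure local output dirs': logs/, artifacts/, docs/."""
    for name in ("logs", "artifacts", "docs"):
        (WORK_DIR / name).mkdir(parents=True, exist_ok=True)


def collect(name: str):
    """Counterpart of 'Collect logs, artifacts and docs': rsync from the node.

    The -i flag produces the itemized-changes strings (e.g. ".d..t......")
    visible in the log above.
    """
    subprocess.run(
        ["rsync", "-a", "-i",
         f"{REMOTE}:{REMOTE_STAGE}/{name}/", str(WORK_DIR / name)],
        check=True,
    )


def make_readable(path: Path):
    """Counterpart of 'Ensure logs are readable before uploading'."""
    for root, _dirs, files in os.walk(path):
        for name in files:
            p = Path(root) / name
            p.chmod(p.stat().st_mode | stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)


if __name__ == "__main__":
    ensure_local_output_dirs()
    for item in ("logs", "artifacts", "docs"):
        collect(item)
    make_readable(WORK_DIR)  # before handing the tree to the upload step
```

The actual upload step is role-specific (its output is suppressed by no_log above), so it is left out of the sketch.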