2025-03-26 15:05:17.580545 | Job console starting...
2025-03-26 15:05:17.589591 | Updating repositories
2025-03-26 15:05:17.678300 | Preparing job workspace
2025-03-26 15:05:19.490517 | Running Ansible setup...
2025-03-26 15:05:24.675871 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-03-26 15:05:25.432927 |
2025-03-26 15:05:25.433084 | PLAY [Base pre]
2025-03-26 15:05:25.464222 |
2025-03-26 15:05:25.464347 | TASK [Setup log path fact]
2025-03-26 15:05:25.489017 | orchestrator | ok
2025-03-26 15:05:25.509147 |
2025-03-26 15:05:25.509271 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-03-26 15:05:25.561623 | orchestrator | ok
2025-03-26 15:05:25.579742 |
2025-03-26 15:05:25.579847 | TASK [emit-job-header : Print job information]
2025-03-26 15:05:25.651953 | # Job Information
2025-03-26 15:05:25.652186 | Ansible Version: 2.15.3
2025-03-26 15:05:25.652242 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-03-26 15:05:25.652293 | Pipeline: post
2025-03-26 15:05:25.652330 | Executor: 7d211f194f6a
2025-03-26 15:05:25.652362 | Triggered by: https://github.com/osism/testbed/commit/7d162bd40c21a42aaaf6c8ad9d259bed7c67afc5
2025-03-26 15:05:25.652435 | Event ID: cfa77ae8-0a42-11f0-9d2b-ac7427b70d72
2025-03-26 15:05:25.663361 |
2025-03-26 15:05:25.663504 | LOOP [emit-job-header : Print node information]
2025-03-26 15:05:25.841734 | orchestrator | ok:
2025-03-26 15:05:25.841981 | orchestrator | # Node Information
2025-03-26 15:05:25.842029 | orchestrator | Inventory Hostname: orchestrator
2025-03-26 15:05:25.842064 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-03-26 15:05:25.842095 | orchestrator | Username: zuul-testbed02
2025-03-26 15:05:25.842123 | orchestrator | Distro: Debian 12.10
2025-03-26 15:05:25.842156 | orchestrator | Provider: static-testbed
2025-03-26 15:05:25.842183 | orchestrator | Label: testbed-orchestrator
2025-03-26 15:05:25.842212 | orchestrator | Product Name: OpenStack Nova
2025-03-26 15:05:25.842240 | orchestrator | Interface IP: 81.163.193.140
2025-03-26 15:05:25.875335 |
2025-03-26 15:05:25.875466 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-03-26 15:05:26.368452 | orchestrator -> localhost | changed
2025-03-26 15:05:26.388592 |
2025-03-26 15:05:26.388753 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-03-26 15:05:27.489376 | orchestrator -> localhost | changed
2025-03-26 15:05:27.515160 |
2025-03-26 15:05:27.515289 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-03-26 15:05:27.817247 | orchestrator -> localhost | ok
2025-03-26 15:05:27.835310 |
2025-03-26 15:05:27.835558 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-03-26 15:05:27.892940 | orchestrator | ok
2025-03-26 15:05:27.913952 | orchestrator | included: /var/lib/zuul/builds/dc1fbcec8af84866bbbe7ca0e0139f55/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-03-26 15:05:27.924010 |
2025-03-26 15:05:27.924113 | TASK [add-build-sshkey : Create Temp SSH key]
2025-03-26 15:05:28.559156 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-03-26 15:05:28.559449 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/dc1fbcec8af84866bbbe7ca0e0139f55/work/dc1fbcec8af84866bbbe7ca0e0139f55_id_rsa
2025-03-26 15:05:28.559525 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/dc1fbcec8af84866bbbe7ca0e0139f55/work/dc1fbcec8af84866bbbe7ca0e0139f55_id_rsa.pub
2025-03-26 15:05:28.559562 | orchestrator -> localhost | The key fingerprint is:
2025-03-26 15:05:28.559594 | orchestrator -> localhost | SHA256:cabtAso57vYPww02OWuYiSI+YXLq7BxuuZiKa2Gbftg zuul-build-sshkey
2025-03-26 15:05:28.559623 | orchestrator -> localhost | The key's randomart image is:
2025-03-26 15:05:28.559651 | orchestrator -> localhost | +---[RSA 3072]----+
2025-03-26 15:05:28.559680 | orchestrator -> localhost | | |
2025-03-26 15:05:28.559709 | orchestrator -> localhost | | |
2025-03-26 15:05:28.559753 | orchestrator -> localhost | | . o |
2025-03-26 15:05:28.559782 | orchestrator -> localhost | | . * |
2025-03-26 15:05:28.559809 | orchestrator -> localhost | | B S . |
2025-03-26 15:05:28.559836 | orchestrator -> localhost | |.=.o O B . |
2025-03-26 15:05:28.559868 | orchestrator -> localhost | |*+X O * o . |
2025-03-26 15:05:28.559895 | orchestrator -> localhost | |X@.E.o o . |
2025-03-26 15:05:28.559922 | orchestrator -> localhost | |&@+oo.... |
2025-03-26 15:05:28.559950 | orchestrator -> localhost | +----[SHA256]-----+
2025-03-26 15:05:28.560019 | orchestrator -> localhost | ok: Runtime: 0:00:00.121628
2025-03-26 15:05:28.571531 |
2025-03-26 15:05:28.571670 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-03-26 15:05:28.606881 | orchestrator | ok
2025-03-26 15:05:28.621034 | orchestrator | included: /var/lib/zuul/builds/dc1fbcec8af84866bbbe7ca0e0139f55/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-03-26 15:05:28.632238 |
2025-03-26 15:05:28.632348 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-03-26 15:05:28.666957 | orchestrator | skipping: Conditional result was False
2025-03-26 15:05:28.678530 |
2025-03-26 15:05:28.678648 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-03-26 15:05:29.235108 | orchestrator | changed
2025-03-26 15:05:29.245735 |
2025-03-26 15:05:29.245862 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-03-26 15:05:29.526965 | orchestrator | ok
2025-03-26 15:05:29.536197 |
2025-03-26 15:05:29.536326 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-03-26 15:05:29.886061 | orchestrator | ok
2025-03-26 15:05:29.894255 |
2025-03-26 15:05:29.894370 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-03-26 15:05:30.280172 | orchestrator | ok
2025-03-26 15:05:30.291188 |
2025-03-26 15:05:30.291315 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-03-26 15:05:30.327223 | orchestrator | skipping: Conditional result was False
2025-03-26 15:05:30.370561 |
2025-03-26 15:05:30.370680 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-03-26 15:05:30.785103 | orchestrator -> localhost | changed
2025-03-26 15:05:30.805977 |
2025-03-26 15:05:30.806107 | TASK [add-build-sshkey : Add back temp key]
2025-03-26 15:05:31.156221 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/dc1fbcec8af84866bbbe7ca0e0139f55/work/dc1fbcec8af84866bbbe7ca0e0139f55_id_rsa (zuul-build-sshkey)
2025-03-26 15:05:31.156540 | orchestrator -> localhost | ok: Runtime: 0:00:00.012505
2025-03-26 15:05:31.168154 |
2025-03-26 15:05:31.168287 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-03-26 15:05:31.585228 | orchestrator | ok
2025-03-26 15:05:31.593023 |
2025-03-26 15:05:31.593139 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-03-26 15:05:31.628049 | orchestrator | skipping: Conditional result was False
2025-03-26 15:05:31.644591 |
2025-03-26 15:05:31.644697 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-03-26 15:05:32.051679 | orchestrator | ok
2025-03-26 15:05:32.071817 |
2025-03-26 15:05:32.071932 | TASK [validate-host : Define zuul_info_dir fact]
2025-03-26 15:05:32.117409 | orchestrator | ok
2025-03-26 15:05:32.126662 |
2025-03-26 15:05:32.126771 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-03-26 15:05:32.448462 | orchestrator -> localhost | ok
2025-03-26 15:05:32.467040 |
2025-03-26 15:05:32.467190 | TASK [validate-host : Collect information about the host]
2025-03-26 15:05:33.723835 | orchestrator | ok
2025-03-26 15:05:33.740855 |
2025-03-26 15:05:33.740976 | TASK [validate-host : Sanitize hostname]
2025-03-26 15:05:33.822946 | orchestrator | ok
2025-03-26 15:05:33.832705 |
2025-03-26 15:05:33.832831 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-03-26 15:05:34.413387 | orchestrator -> localhost | changed
2025-03-26 15:05:34.427394 |
2025-03-26 15:05:34.427584 | TASK [validate-host : Collect information about zuul worker]
2025-03-26 15:05:34.933767 | orchestrator | ok
2025-03-26 15:05:34.942438 |
2025-03-26 15:05:34.942604 | TASK [validate-host : Write out all zuul information for each host]
2025-03-26 15:05:35.540796 | orchestrator -> localhost | changed
2025-03-26 15:05:35.563124 |
2025-03-26 15:05:35.563259 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-03-26 15:05:35.855028 | orchestrator | ok
2025-03-26 15:05:35.865333 |
2025-03-26 15:05:35.865456 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-03-26 15:06:01.412653 | orchestrator | changed:
2025-03-26 15:06:01.412875 | orchestrator | .d..t...... src/
2025-03-26 15:06:01.412912 | orchestrator | .d..t...... src/github.com/
2025-03-26 15:06:01.412937 | orchestrator | .d..t...... src/github.com/osism/
2025-03-26 15:06:01.412958 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-03-26 15:06:01.412978 | orchestrator | RedHat.yml
2025-03-26 15:06:01.428619 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-03-26 15:06:01.428638 | orchestrator | RedHat.yml
2025-03-26 15:06:01.428693 | orchestrator | = 1.53.0"...
2025-03-26 15:06:13.207680 | orchestrator | 15:06:13.207 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-03-26 15:06:13.286425 | orchestrator | 15:06:13.286 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-03-26 15:06:14.534409 | orchestrator | 15:06:14.534 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-03-26 15:06:15.744879 | orchestrator | 15:06:15.744 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-03-26 15:06:17.077035 | orchestrator | 15:06:17.076 STDOUT terraform: - Installing hashicorp/local v2.5.2...
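The `tofu init` run above resolves three providers (openstack, local, null); the init listing continues below. A `required_providers` block consistent with this output might look like the following sketch. Attributing the partially visible `>= 1.53.0` constraint to the OpenStack provider is an assumption, as is the exact file layout; this is not taken from the osism/testbed sources.

terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # assumed: only a fragment of this constraint survives in the log
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # matches the "Finding hashicorp/local versions" constraint above
    }
    null = {
      source = "hashicorp/null" # no constraint, so init resolves the latest version (v3.2.3 here)
    }
  }
}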
2025-03-26 15:06:18.080378 | orchestrator | 15:06:18.080 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80)
2025-03-26 15:06:18.968879 | orchestrator | 15:06:18.968 STDOUT terraform: - Installing hashicorp/null v3.2.3...
2025-03-26 15:06:19.758795 | orchestrator | 15:06:19.758 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80)
2025-03-26 15:06:19.758845 | orchestrator | 15:06:19.758 STDOUT terraform: Providers are signed by their developers.
2025-03-26 15:06:19.759623 | orchestrator | 15:06:19.758 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-03-26 15:06:19.759714 | orchestrator | 15:06:19.758 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-03-26 15:06:19.759736 | orchestrator | 15:06:19.758 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-03-26 15:06:19.759752 | orchestrator | 15:06:19.759 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-03-26 15:06:19.759768 | orchestrator | 15:06:19.759 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-03-26 15:06:19.759783 | orchestrator | 15:06:19.759 STDOUT terraform: you run "tofu init" in the future.
2025-03-26 15:06:19.759798 | orchestrator | 15:06:19.759 STDOUT terraform: OpenTofu has been successfully initialized!
2025-03-26 15:06:19.759818 | orchestrator | 15:06:19.759 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-03-26 15:06:19.909775 | orchestrator | 15:06:19.759 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-03-26 15:06:19.909879 | orchestrator | 15:06:19.759 STDOUT terraform: should now work.
2025-03-26 15:06:19.909895 | orchestrator | 15:06:19.759 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-03-26 15:06:19.909904 | orchestrator | 15:06:19.759 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-03-26 15:06:19.909912 | orchestrator | 15:06:19.759 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-03-26 15:06:19.909979 | orchestrator | 15:06:19.909 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-03-26 15:06:20.084750 | orchestrator | 15:06:20.084 STDOUT terraform: Created and switched to workspace "ci"!
2025-03-26 15:06:20.084838 | orchestrator | 15:06:20.084 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-03-26 15:06:20.084858 | orchestrator | 15:06:20.084 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-03-26 15:06:20.270273 | orchestrator | 15:06:20.084 STDOUT terraform: for this configuration.
2025-03-26 15:06:20.270407 | orchestrator | 15:06:20.270 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-03-26 15:06:20.364364 | orchestrator | 15:06:20.364 STDOUT terraform: ci.auto.tfvars
2025-03-26 15:06:20.542747 | orchestrator | 15:06:20.542 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt.
Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead. 2025-03-26 15:06:21.497673 | orchestrator | 15:06:21.497 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-03-26 15:06:22.005246 | orchestrator | 15:06:22.004 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-03-26 15:06:22.193953 | orchestrator | 15:06:22.193 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-03-26 15:06:22.194010 | orchestrator | 15:06:22.193 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-03-26 15:06:22.194064 | orchestrator | 15:06:22.193 STDOUT terraform:  + create 2025-03-26 15:06:22.194078 | orchestrator | 15:06:22.194 STDOUT terraform:  <= read (data resources) 2025-03-26 15:06:22.194139 | orchestrator | 15:06:22.194 STDOUT terraform: OpenTofu will perform the following actions: 2025-03-26 15:06:22.194280 | orchestrator | 15:06:22.194 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-03-26 15:06:22.194317 | orchestrator | 15:06:22.194 STDOUT terraform:  # (config refers to values not yet known) 2025-03-26 15:06:22.194348 | orchestrator | 15:06:22.194 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-03-26 15:06:22.194399 | orchestrator | 15:06:22.194 STDOUT terraform:  + checksum = (known after apply) 2025-03-26 15:06:22.194407 | orchestrator | 15:06:22.194 STDOUT terraform:  + created_at = (known after apply) 2025-03-26 15:06:22.194442 | orchestrator | 15:06:22.194 STDOUT terraform:  + file = (known after apply) 2025-03-26 15:06:22.194486 | orchestrator | 15:06:22.194 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.194505 | orchestrator | 15:06:22.194 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.194538 | orchestrator | 15:06:22.194 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-03-26 15:06:22.194572 | orchestrator | 15:06:22.194 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-03-26 15:06:22.194589 | orchestrator | 15:06:22.194 STDOUT terraform:  + most_recent = true 2025-03-26 15:06:22.194620 | orchestrator | 15:06:22.194 STDOUT terraform:  + name = (known after apply) 2025-03-26 15:06:22.194644 | orchestrator | 15:06:22.194 STDOUT terraform:  + protected = (known after apply) 2025-03-26 15:06:22.194675 | orchestrator | 15:06:22.194 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.194706 | orchestrator | 15:06:22.194 STDOUT terraform:  + schema = (known after apply) 2025-03-26 15:06:22.194746 | orchestrator | 15:06:22.194 STDOUT terraform:  + size_bytes = (known after apply) 2025-03-26 15:06:22.194771 | orchestrator | 15:06:22.194 STDOUT terraform:  + tags = (known after apply) 2025-03-26 15:06:22.194803 | orchestrator | 15:06:22.194 STDOUT terraform:  + updated_at = (known after apply) 2025-03-26 15:06:22.194821 | orchestrator | 15:06:22.194 STDOUT terraform:  } 2025-03-26 15:06:22.195033 | orchestrator | 15:06:22.194 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-03-26 15:06:22.195065 | orchestrator | 15:06:22.195 STDOUT terraform:  # (config refers to values not yet known) 2025-03-26 15:06:22.195098 | orchestrator | 15:06:22.195 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-03-26 15:06:22.195137 | orchestrator | 15:06:22.195 STDOUT terraform:  + checksum = (known after apply) 2025-03-26 
15:06:22.195160 | orchestrator | 15:06:22.195 STDOUT terraform:  + created_at = (known after apply) 2025-03-26 15:06:22.195189 | orchestrator | 15:06:22.195 STDOUT terraform:  + file = (known after apply) 2025-03-26 15:06:22.195227 | orchestrator | 15:06:22.195 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.195251 | orchestrator | 15:06:22.195 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.195283 | orchestrator | 15:06:22.195 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-03-26 15:06:22.195325 | orchestrator | 15:06:22.195 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-03-26 15:06:22.195348 | orchestrator | 15:06:22.195 STDOUT terraform:  + most_recent = true 2025-03-26 15:06:22.195380 | orchestrator | 15:06:22.195 STDOUT terraform:  + name = (known after apply) 2025-03-26 15:06:22.195413 | orchestrator | 15:06:22.195 STDOUT terraform:  + protected = (known after apply) 2025-03-26 15:06:22.195443 | orchestrator | 15:06:22.195 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.195479 | orchestrator | 15:06:22.195 STDOUT terraform:  + schema = (known after apply) 2025-03-26 15:06:22.195506 | orchestrator | 15:06:22.195 STDOUT terraform:  + size_bytes = (known after apply) 2025-03-26 15:06:22.195535 | orchestrator | 15:06:22.195 STDOUT terraform:  + tags = (known after apply) 2025-03-26 15:06:22.195572 | orchestrator | 15:06:22.195 STDOUT terraform:  + updated_at = (known after apply) 2025-03-26 15:06:22.195617 | orchestrator | 15:06:22.195 STDOUT terraform:  } 2025-03-26 15:06:22.195625 | orchestrator | 15:06:22.195 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-03-26 15:06:22.195656 | orchestrator | 15:06:22.195 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-03-26 15:06:22.195696 | orchestrator | 15:06:22.195 STDOUT terraform:  + content = (known after apply) 2025-03-26 15:06:22.195764 | orchestrator | 15:06:22.195 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-03-26 15:06:22.195806 | orchestrator | 15:06:22.195 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-03-26 15:06:22.195814 | orchestrator | 15:06:22.195 STDOUT terraform:  + content_md5 = (known after apply) 2025-03-26 15:06:22.195847 | orchestrator | 15:06:22.195 STDOUT terraform:  + content_sha1 = (known after apply) 2025-03-26 15:06:22.195882 | orchestrator | 15:06:22.195 STDOUT terraform:  + content_sha256 = (known after apply) 2025-03-26 15:06:22.195918 | orchestrator | 15:06:22.195 STDOUT terraform:  + content_sha512 = (known after apply) 2025-03-26 15:06:22.195966 | orchestrator | 15:06:22.195 STDOUT terraform:  + directory_permission = "0777" 2025-03-26 15:06:22.195991 | orchestrator | 15:06:22.195 STDOUT terraform:  + file_permission = "0644" 2025-03-26 15:06:22.196033 | orchestrator | 15:06:22.195 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-03-26 15:06:22.196069 | orchestrator | 15:06:22.196 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.196077 | orchestrator | 15:06:22.196 STDOUT terraform:  } 2025-03-26 15:06:22.196113 | orchestrator | 15:06:22.196 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-03-26 15:06:22.196136 | orchestrator | 15:06:22.196 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-03-26 15:06:22.196183 | orchestrator | 15:06:22.196 STDOUT terraform:  + content = (known after apply) 2025-03-26 15:06:22.196212 | orchestrator | 15:06:22.196 STDOUT terraform:  + 
content_base64sha256 = (known after apply) 2025-03-26 15:06:22.196248 | orchestrator | 15:06:22.196 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-03-26 15:06:22.196285 | orchestrator | 15:06:22.196 STDOUT terraform:  + content_md5 = (known after apply) 2025-03-26 15:06:22.196321 | orchestrator | 15:06:22.196 STDOUT terraform:  + content_sha1 = (known after apply) 2025-03-26 15:06:22.196360 | orchestrator | 15:06:22.196 STDOUT terraform:  + content_sha256 = (known after apply) 2025-03-26 15:06:22.196397 | orchestrator | 15:06:22.196 STDOUT terraform:  + content_sha512 = (known after apply) 2025-03-26 15:06:22.196433 | orchestrator | 15:06:22.196 STDOUT terraform:  + directory_permission = "0777" 2025-03-26 15:06:22.196441 | orchestrator | 15:06:22.196 STDOUT terraform:  + file_permission = "0644" 2025-03-26 15:06:22.196479 | orchestrator | 15:06:22.196 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-03-26 15:06:22.196518 | orchestrator | 15:06:22.196 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.196525 | orchestrator | 15:06:22.196 STDOUT terraform:  } 2025-03-26 15:06:22.196552 | orchestrator | 15:06:22.196 STDOUT terraform:  # local_file.inventory will be created 2025-03-26 15:06:22.196585 | orchestrator | 15:06:22.196 STDOUT terraform:  + resource "local_file" "inventory" { 2025-03-26 15:06:22.196616 | orchestrator | 15:06:22.196 STDOUT terraform:  + content = (known after apply) 2025-03-26 15:06:22.196654 | orchestrator | 15:06:22.196 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-03-26 15:06:22.196690 | orchestrator | 15:06:22.196 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-03-26 15:06:22.196728 | orchestrator | 15:06:22.196 STDOUT terraform:  + content_md5 = (known after apply) 2025-03-26 15:06:22.196765 | orchestrator | 15:06:22.196 STDOUT terraform:  + content_sha1 = (known after apply) 2025-03-26 15:06:22.196802 | orchestrator | 15:06:22.196 STDOUT terraform:  + content_sha256 = (known after apply) 2025-03-26 15:06:22.196839 | orchestrator | 15:06:22.196 STDOUT terraform:  + content_sha512 = (known after apply) 2025-03-26 15:06:22.196863 | orchestrator | 15:06:22.196 STDOUT terraform:  + directory_permission = "0777" 2025-03-26 15:06:22.196889 | orchestrator | 15:06:22.196 STDOUT terraform:  + file_permission = "0644" 2025-03-26 15:06:22.196949 | orchestrator | 15:06:22.196 STDOUT terraform:  + filename = "inventory.ci" 2025-03-26 15:06:22.196969 | orchestrator | 15:06:22.196 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.196985 | orchestrator | 15:06:22.196 STDOUT terraform:  } 2025-03-26 15:06:22.197010 | orchestrator | 15:06:22.196 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-03-26 15:06:22.197041 | orchestrator | 15:06:22.197 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-03-26 15:06:22.197080 | orchestrator | 15:06:22.197 STDOUT terraform:  + content = (sensitive value) 2025-03-26 15:06:22.197109 | orchestrator | 15:06:22.197 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-03-26 15:06:22.197152 | orchestrator | 15:06:22.197 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-03-26 15:06:22.197181 | orchestrator | 15:06:22.197 STDOUT terraform:  + content_md5 = (known after apply) 2025-03-26 15:06:22.197221 | orchestrator | 15:06:22.197 STDOUT terraform:  + content_sha1 = (known after apply) 2025-03-26 15:06:22.197255 | orchestrator | 15:06:22.197 STDOUT terraform:  + 
content_sha256 = (known after apply) 2025-03-26 15:06:22.197290 | orchestrator | 15:06:22.197 STDOUT terraform:  + content_sha512 = (known after apply) 2025-03-26 15:06:22.197321 | orchestrator | 15:06:22.197 STDOUT terraform:  + directory_permission = "0700" 2025-03-26 15:06:22.197341 | orchestrator | 15:06:22.197 STDOUT terraform:  + file_permission = "0600" 2025-03-26 15:06:22.197374 | orchestrator | 15:06:22.197 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-03-26 15:06:22.197419 | orchestrator | 15:06:22.197 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.197450 | orchestrator | 15:06:22.197 STDOUT terraform:  } 2025-03-26 15:06:22.197458 | orchestrator | 15:06:22.197 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-03-26 15:06:22.197490 | orchestrator | 15:06:22.197 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-03-26 15:06:22.197498 | orchestrator | 15:06:22.197 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.197517 | orchestrator | 15:06:22.197 STDOUT terraform:  } 2025-03-26 15:06:22.197595 | orchestrator | 15:06:22.197 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-03-26 15:06:22.197634 | orchestrator | 15:06:22.197 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-03-26 15:06:22.197674 | orchestrator | 15:06:22.197 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.197684 | orchestrator | 15:06:22.197 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.197721 | orchestrator | 15:06:22.197 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.197755 | orchestrator | 15:06:22.197 STDOUT terraform:  + image_id = (known after apply) 2025-03-26 15:06:22.197786 | orchestrator | 15:06:22.197 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.197832 | orchestrator | 15:06:22.197 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-03-26 15:06:22.197885 | orchestrator | 15:06:22.197 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.197912 | orchestrator | 15:06:22.197 STDOUT terraform:  + size = 80 2025-03-26 15:06:22.197940 | orchestrator | 15:06:22.197 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.197947 | orchestrator | 15:06:22.197 STDOUT terraform:  } 2025-03-26 15:06:22.197997 | orchestrator | 15:06:22.197 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-03-26 15:06:22.198059 | orchestrator | 15:06:22.197 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-26 15:06:22.198089 | orchestrator | 15:06:22.198 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.198120 | orchestrator | 15:06:22.198 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.198146 | orchestrator | 15:06:22.198 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.198179 | orchestrator | 15:06:22.198 STDOUT terraform:  + image_id = (known after apply) 2025-03-26 15:06:22.198213 | orchestrator | 15:06:22.198 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.198258 | orchestrator | 15:06:22.198 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-03-26 15:06:22.198291 | orchestrator | 15:06:22.198 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.198321 | orchestrator | 15:06:22.198 STDOUT terraform:  + size = 80 2025-03-26 15:06:22.198352 
| orchestrator | 15:06:22.198 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.198403 | orchestrator | 15:06:22.198 STDOUT terraform:  } 2025-03-26 15:06:22.198411 | orchestrator | 15:06:22.198 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-03-26 15:06:22.198452 | orchestrator | 15:06:22.198 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-26 15:06:22.198484 | orchestrator | 15:06:22.198 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.198515 | orchestrator | 15:06:22.198 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.198542 | orchestrator | 15:06:22.198 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.198573 | orchestrator | 15:06:22.198 STDOUT terraform:  + image_id = (known after apply) 2025-03-26 15:06:22.198612 | orchestrator | 15:06:22.198 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.198647 | orchestrator | 15:06:22.198 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-03-26 15:06:22.198688 | orchestrator | 15:06:22.198 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.198700 | orchestrator | 15:06:22.198 STDOUT terraform:  + size = 80 2025-03-26 15:06:22.198719 | orchestrator | 15:06:22.198 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.198726 | orchestrator | 15:06:22.198 STDOUT terraform:  } 2025-03-26 15:06:22.198774 | orchestrator | 15:06:22.198 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-03-26 15:06:22.198821 | orchestrator | 15:06:22.198 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-26 15:06:22.198852 | orchestrator | 15:06:22.198 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.198868 | orchestrator | 15:06:22.198 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.198903 | orchestrator | 15:06:22.198 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.198952 | orchestrator | 15:06:22.198 STDOUT terraform:  + image_id = (known after apply) 2025-03-26 15:06:22.198977 | orchestrator | 15:06:22.198 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.199018 | orchestrator | 15:06:22.198 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-03-26 15:06:22.199050 | orchestrator | 15:06:22.199 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.199069 | orchestrator | 15:06:22.199 STDOUT terraform:  + size = 80 2025-03-26 15:06:22.199092 | orchestrator | 15:06:22.199 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.199099 | orchestrator | 15:06:22.199 STDOUT terraform:  } 2025-03-26 15:06:22.199149 | orchestrator | 15:06:22.199 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-03-26 15:06:22.199199 | orchestrator | 15:06:22.199 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-26 15:06:22.199229 | orchestrator | 15:06:22.199 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.199252 | orchestrator | 15:06:22.199 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.199284 | orchestrator | 15:06:22.199 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.199316 | orchestrator | 15:06:22.199 STDOUT terraform:  + image_id = (known after apply) 2025-03-26 15:06:22.199355 | orchestrator | 15:06:22.199 STDOUT 
terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.199389 | orchestrator | 15:06:22.199 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-03-26 15:06:22.199421 | orchestrator | 15:06:22.199 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.199442 | orchestrator | 15:06:22.199 STDOUT terraform:  + size = 80 2025-03-26 15:06:22.199464 | orchestrator | 15:06:22.199 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.199472 | orchestrator | 15:06:22.199 STDOUT terraform:  } 2025-03-26 15:06:22.199523 | orchestrator | 15:06:22.199 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-03-26 15:06:22.199582 | orchestrator | 15:06:22.199 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-26 15:06:22.199610 | orchestrator | 15:06:22.199 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.199630 | orchestrator | 15:06:22.199 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.199661 | orchestrator | 15:06:22.199 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.199693 | orchestrator | 15:06:22.199 STDOUT terraform:  + image_id = (known after apply) 2025-03-26 15:06:22.199725 | orchestrator | 15:06:22.199 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.199765 | orchestrator | 15:06:22.199 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-03-26 15:06:22.199796 | orchestrator | 15:06:22.199 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.199817 | orchestrator | 15:06:22.199 STDOUT terraform:  + size = 80 2025-03-26 15:06:22.199839 | orchestrator | 15:06:22.199 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.199846 | orchestrator | 15:06:22.199 STDOUT terraform:  } 2025-03-26 15:06:22.199897 | orchestrator | 15:06:22.199 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-03-26 15:06:22.199969 | orchestrator | 15:06:22.199 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-03-26 15:06:22.199990 | orchestrator | 15:06:22.199 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.200013 | orchestrator | 15:06:22.199 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.200047 | orchestrator | 15:06:22.200 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.200078 | orchestrator | 15:06:22.200 STDOUT terraform:  + image_id = (known after apply) 2025-03-26 15:06:22.200110 | orchestrator | 15:06:22.200 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.200151 | orchestrator | 15:06:22.200 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-03-26 15:06:22.200183 | orchestrator | 15:06:22.200 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.200205 | orchestrator | 15:06:22.200 STDOUT terraform:  + size = 80 2025-03-26 15:06:22.200227 | orchestrator | 15:06:22.200 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.200234 | orchestrator | 15:06:22.200 STDOUT terraform:  } 2025-03-26 15:06:22.200283 | orchestrator | 15:06:22.200 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-03-26 15:06:22.200327 | orchestrator | 15:06:22.200 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.200360 | orchestrator | 15:06:22.200 STDOUT terraform:  + attachment = (known after apply) 
2025-03-26 15:06:22.200382 | orchestrator | 15:06:22.200 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.200416 | orchestrator | 15:06:22.200 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.200446 | orchestrator | 15:06:22.200 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.200488 | orchestrator | 15:06:22.200 STDOUT terraform:  + name = "testbed-volume-0-node-0" 2025-03-26 15:06:22.200519 | orchestrator | 15:06:22.200 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.200540 | orchestrator | 15:06:22.200 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.200563 | orchestrator | 15:06:22.200 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.200571 | orchestrator | 15:06:22.200 STDOUT terraform:  } 2025-03-26 15:06:22.200619 | orchestrator | 15:06:22.200 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-03-26 15:06:22.200663 | orchestrator | 15:06:22.200 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.200698 | orchestrator | 15:06:22.200 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.200718 | orchestrator | 15:06:22.200 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.200751 | orchestrator | 15:06:22.200 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.200783 | orchestrator | 15:06:22.200 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.200822 | orchestrator | 15:06:22.200 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-03-26 15:06:22.200853 | orchestrator | 15:06:22.200 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.200872 | orchestrator | 15:06:22.200 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.200894 | orchestrator | 15:06:22.200 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.200901 | orchestrator | 15:06:22.200 STDOUT terraform:  } 2025-03-26 15:06:22.200957 | orchestrator | 15:06:22.200 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-03-26 15:06:22.201001 | orchestrator | 15:06:22.200 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.201032 | orchestrator | 15:06:22.200 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.201053 | orchestrator | 15:06:22.201 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.201084 | orchestrator | 15:06:22.201 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.201118 | orchestrator | 15:06:22.201 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.201155 | orchestrator | 15:06:22.201 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-03-26 15:06:22.201187 | orchestrator | 15:06:22.201 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.201209 | orchestrator | 15:06:22.201 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.201231 | orchestrator | 15:06:22.201 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.201239 | orchestrator | 15:06:22.201 STDOUT terraform:  } 2025-03-26 15:06:22.201288 | orchestrator | 15:06:22.201 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-03-26 15:06:22.201333 | orchestrator | 15:06:22.201 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.201364 | orchestrator | 15:06:22.201 STDOUT terraform:  + attachment = 
(known after apply) 2025-03-26 15:06:22.201385 | orchestrator | 15:06:22.201 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.201418 | orchestrator | 15:06:22.201 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.201448 | orchestrator | 15:06:22.201 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.201487 | orchestrator | 15:06:22.201 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-03-26 15:06:22.201518 | orchestrator | 15:06:22.201 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.201539 | orchestrator | 15:06:22.201 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.201561 | orchestrator | 15:06:22.201 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.201568 | orchestrator | 15:06:22.201 STDOUT terraform:  } 2025-03-26 15:06:22.201617 | orchestrator | 15:06:22.201 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-03-26 15:06:22.201660 | orchestrator | 15:06:22.201 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.201691 | orchestrator | 15:06:22.201 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.201713 | orchestrator | 15:06:22.201 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.201745 | orchestrator | 15:06:22.201 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.201776 | orchestrator | 15:06:22.201 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.201815 | orchestrator | 15:06:22.201 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-03-26 15:06:22.201847 | orchestrator | 15:06:22.201 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.201867 | orchestrator | 15:06:22.201 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.201889 | orchestrator | 15:06:22.201 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.201897 | orchestrator | 15:06:22.201 STDOUT terraform:  } 2025-03-26 15:06:22.201962 | orchestrator | 15:06:22.201 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-03-26 15:06:22.202006 | orchestrator | 15:06:22.201 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.202049 | orchestrator | 15:06:22.202 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.202071 | orchestrator | 15:06:22.202 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.202103 | orchestrator | 15:06:22.202 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.202137 | orchestrator | 15:06:22.202 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.202174 | orchestrator | 15:06:22.202 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-03-26 15:06:22.202208 | orchestrator | 15:06:22.202 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.202229 | orchestrator | 15:06:22.202 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.202252 | orchestrator | 15:06:22.202 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.202259 | orchestrator | 15:06:22.202 STDOUT terraform:  } 2025-03-26 15:06:22.202307 | orchestrator | 15:06:22.202 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-03-26 15:06:22.202351 | orchestrator | 15:06:22.202 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.202383 | orchestrator | 15:06:22.202 STDOUT terraform:  
+ attachment = (known after apply) 2025-03-26 15:06:22.202403 | orchestrator | 15:06:22.202 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.202436 | orchestrator | 15:06:22.202 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.202467 | orchestrator | 15:06:22.202 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.202508 | orchestrator | 15:06:22.202 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-03-26 15:06:22.202539 | orchestrator | 15:06:22.202 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.202561 | orchestrator | 15:06:22.202 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.202585 | orchestrator | 15:06:22.202 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.202592 | orchestrator | 15:06:22.202 STDOUT terraform:  } 2025-03-26 15:06:22.202672 | orchestrator | 15:06:22.202 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-03-26 15:06:22.202716 | orchestrator | 15:06:22.202 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.202748 | orchestrator | 15:06:22.202 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.202770 | orchestrator | 15:06:22.202 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.202804 | orchestrator | 15:06:22.202 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.202835 | orchestrator | 15:06:22.202 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.202875 | orchestrator | 15:06:22.202 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-03-26 15:06:22.202906 | orchestrator | 15:06:22.202 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.202937 | orchestrator | 15:06:22.202 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.202967 | orchestrator | 15:06:22.202 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.202974 | orchestrator | 15:06:22.202 STDOUT terraform:  } 2025-03-26 15:06:22.203023 | orchestrator | 15:06:22.202 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-03-26 15:06:22.203068 | orchestrator | 15:06:22.203 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.203100 | orchestrator | 15:06:22.203 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.203117 | orchestrator | 15:06:22.203 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.203151 | orchestrator | 15:06:22.203 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.203181 | orchestrator | 15:06:22.203 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.203219 | orchestrator | 15:06:22.203 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-03-26 15:06:22.203251 | orchestrator | 15:06:22.203 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.203273 | orchestrator | 15:06:22.203 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.203294 | orchestrator | 15:06:22.203 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.203302 | orchestrator | 15:06:22.203 STDOUT terraform:  } 2025-03-26 15:06:22.203352 | orchestrator | 15:06:22.203 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-03-26 15:06:22.203397 | orchestrator | 15:06:22.203 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.203428 | orchestrator | 15:06:22.203 
STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.203450 | orchestrator | 15:06:22.203 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.203482 | orchestrator | 15:06:22.203 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.203513 | orchestrator | 15:06:22.203 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.203552 | orchestrator | 15:06:22.203 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-03-26 15:06:22.203584 | orchestrator | 15:06:22.203 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.203608 | orchestrator | 15:06:22.203 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.203631 | orchestrator | 15:06:22.203 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.203638 | orchestrator | 15:06:22.203 STDOUT terraform:  } 2025-03-26 15:06:22.203687 | orchestrator | 15:06:22.203 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-03-26 15:06:22.203731 | orchestrator | 15:06:22.203 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.203764 | orchestrator | 15:06:22.203 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.203785 | orchestrator | 15:06:22.203 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.203817 | orchestrator | 15:06:22.203 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.203850 | orchestrator | 15:06:22.203 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.203890 | orchestrator | 15:06:22.203 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-03-26 15:06:22.203921 | orchestrator | 15:06:22.203 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.203948 | orchestrator | 15:06:22.203 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.203967 | orchestrator | 15:06:22.203 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.203974 | orchestrator | 15:06:22.203 STDOUT terraform:  } 2025-03-26 15:06:22.204022 | orchestrator | 15:06:22.203 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-03-26 15:06:22.204067 | orchestrator | 15:06:22.204 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.204097 | orchestrator | 15:06:22.204 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.204121 | orchestrator | 15:06:22.204 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.204154 | orchestrator | 15:06:22.204 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.204185 | orchestrator | 15:06:22.204 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.204224 | orchestrator | 15:06:22.204 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-03-26 15:06:22.204257 | orchestrator | 15:06:22.204 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.204277 | orchestrator | 15:06:22.204 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.204298 | orchestrator | 15:06:22.204 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.204305 | orchestrator | 15:06:22.204 STDOUT terraform:  } 2025-03-26 15:06:22.204353 | orchestrator | 15:06:22.204 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-03-26 15:06:22.204397 | orchestrator | 15:06:22.204 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.204428 | 
orchestrator | 15:06:22.204 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.204449 | orchestrator | 15:06:22.204 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.204483 | orchestrator | 15:06:22.204 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.204527 | orchestrator | 15:06:22.204 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.204568 | orchestrator | 15:06:22.204 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-03-26 15:06:22.204598 | orchestrator | 15:06:22.204 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.204619 | orchestrator | 15:06:22.204 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.204641 | orchestrator | 15:06:22.204 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.204658 | orchestrator | 15:06:22.204 STDOUT terraform:  } 2025-03-26 15:06:22.204698 | orchestrator | 15:06:22.204 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-03-26 15:06:22.204743 | orchestrator | 15:06:22.204 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.204774 | orchestrator | 15:06:22.204 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.204796 | orchestrator | 15:06:22.204 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.204827 | orchestrator | 15:06:22.204 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.204858 | orchestrator | 15:06:22.204 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.204898 | orchestrator | 15:06:22.204 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-03-26 15:06:22.204940 | orchestrator | 15:06:22.204 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.204958 | orchestrator | 15:06:22.204 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.204978 | orchestrator | 15:06:22.204 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.204994 | orchestrator | 15:06:22.204 STDOUT terraform:  } 2025-03-26 15:06:22.205041 | orchestrator | 15:06:22.204 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-03-26 15:06:22.205085 | orchestrator | 15:06:22.205 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.205116 | orchestrator | 15:06:22.205 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.205137 | orchestrator | 15:06:22.205 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.205169 | orchestrator | 15:06:22.205 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.205201 | orchestrator | 15:06:22.205 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.205241 | orchestrator | 15:06:22.205 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-03-26 15:06:22.205273 | orchestrator | 15:06:22.205 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.205294 | orchestrator | 15:06:22.205 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.205316 | orchestrator | 15:06:22.205 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.205323 | orchestrator | 15:06:22.205 STDOUT terraform:  } 2025-03-26 15:06:22.205372 | orchestrator | 15:06:22.205 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-03-26 15:06:22.205418 | orchestrator | 15:06:22.205 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 
2025-03-26 15:06:22.205448 | orchestrator | 15:06:22.205 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.205468 | orchestrator | 15:06:22.205 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.205501 | orchestrator | 15:06:22.205 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.205535 | orchestrator | 15:06:22.205 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.205573 | orchestrator | 15:06:22.205 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-03-26 15:06:22.205606 | orchestrator | 15:06:22.205 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.205630 | orchestrator | 15:06:22.205 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.205652 | orchestrator | 15:06:22.205 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.205659 | orchestrator | 15:06:22.205 STDOUT terraform:  } 2025-03-26 15:06:22.205708 | orchestrator | 15:06:22.205 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-03-26 15:06:22.205752 | orchestrator | 15:06:22.205 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.205783 | orchestrator | 15:06:22.205 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.205804 | orchestrator | 15:06:22.205 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.205836 | orchestrator | 15:06:22.205 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.205867 | orchestrator | 15:06:22.205 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.205906 | orchestrator | 15:06:22.205 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-03-26 15:06:22.205955 | orchestrator | 15:06:22.205 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.205976 | orchestrator | 15:06:22.205 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.205999 | orchestrator | 15:06:22.205 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.206006 | orchestrator | 15:06:22.205 STDOUT terraform:  } 2025-03-26 15:06:22.206068 | orchestrator | 15:06:22.206 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-03-26 15:06:22.206110 | orchestrator | 15:06:22.206 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-03-26 15:06:22.206141 | orchestrator | 15:06:22.206 STDOUT terraform:  + attachment = (known after apply) 2025-03-26 15:06:22.206162 | orchestrator | 15:06:22.206 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.206195 | orchestrator | 15:06:22.206 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.206227 | orchestrator | 15:06:22.206 STDOUT terraform:  + metadata = (known after apply) 2025-03-26 15:06:22.206266 | orchestrator | 15:06:22.206 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-03-26 15:06:22.206296 | orchestrator | 15:06:22.206 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.206318 | orchestrator | 15:06:22.206 STDOUT terraform:  + size = 20 2025-03-26 15:06:22.206340 | orchestrator | 15:06:22.206 STDOUT terraform:  + volume_type = "ssd" 2025-03-26 15:06:22.206347 | orchestrator | 15:06:22.206 STDOUT terraform:  } 2025-03-26 15:06:22.206396 | orchestrator | 15:06:22.206 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-03-26 15:06:22.206441 | orchestrator | 15:06:22.206 STDOUT terraform:  + resource "openstack_compute_instance_v2" 
"manager_server" { 2025-03-26 15:06:22.206475 | orchestrator | 15:06:22.206 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-03-26 15:06:22.206512 | orchestrator | 15:06:22.206 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-03-26 15:06:22.206548 | orchestrator | 15:06:22.206 STDOUT terraform:  + all_metadata = (known after apply) 2025-03-26 15:06:22.206585 | orchestrator | 15:06:22.206 STDOUT terraform:  + all_tags = (known after apply) 2025-03-26 15:06:22.206610 | orchestrator | 15:06:22.206 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.206631 | orchestrator | 15:06:22.206 STDOUT terraform:  + config_drive = true 2025-03-26 15:06:22.206667 | orchestrator | 15:06:22.206 STDOUT terraform:  + created = (known after apply) 2025-03-26 15:06:22.206704 | orchestrator | 15:06:22.206 STDOUT terraform:  + flavor_id = (known after apply) 2025-03-26 15:06:22.206734 | orchestrator | 15:06:22.206 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-03-26 15:06:22.206758 | orchestrator | 15:06:22.206 STDOUT terraform:  + force_delete = false 2025-03-26 15:06:22.206796 | orchestrator | 15:06:22.206 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.206832 | orchestrator | 15:06:22.206 STDOUT terraform:  + image_id = (known after apply) 2025-03-26 15:06:22.206868 | orchestrator | 15:06:22.206 STDOUT terraform:  + image_name = (known after apply) 2025-03-26 15:06:22.206894 | orchestrator | 15:06:22.206 STDOUT terraform:  + key_pair = "testbed" 2025-03-26 15:06:22.206937 | orchestrator | 15:06:22.206 STDOUT terraform:  + name = "testbed-manager" 2025-03-26 15:06:22.206958 | orchestrator | 15:06:22.206 STDOUT terraform:  + power_state = "active" 2025-03-26 15:06:22.206994 | orchestrator | 15:06:22.206 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.207029 | orchestrator | 15:06:22.206 STDOUT terraform:  + security_groups = (known after apply) 2025-03-26 15:06:22.207053 | orchestrator | 15:06:22.207 STDOUT terraform:  + stop_before_destroy = false 2025-03-26 15:06:22.207088 | orchestrator | 15:06:22.207 STDOUT terraform:  + updated = (known after apply) 2025-03-26 15:06:22.207131 | orchestrator | 15:06:22.207 STDOUT terraform:  + user_data = (known after apply) 2025-03-26 15:06:22.207149 | orchestrator | 15:06:22.207 STDOUT terraform:  + block_device { 2025-03-26 15:06:22.207174 | orchestrator | 15:06:22.207 STDOUT terraform:  + boot_index = 0 2025-03-26 15:06:22.207202 | orchestrator | 15:06:22.207 STDOUT terraform:  + delete_on_termination = false 2025-03-26 15:06:22.207233 | orchestrator | 15:06:22.207 STDOUT terraform:  + destination_type = "volume" 2025-03-26 15:06:22.207263 | orchestrator | 15:06:22.207 STDOUT terraform:  + multiattach = false 2025-03-26 15:06:22.207294 | orchestrator | 15:06:22.207 STDOUT terraform:  + source_type = "volume" 2025-03-26 15:06:22.207334 | orchestrator | 15:06:22.207 STDOUT terraform:  + uuid = (known after apply) 2025-03-26 15:06:22.207352 | orchestrator | 15:06:22.207 STDOUT terraform:  } 2025-03-26 15:06:22.207359 | orchestrator | 15:06:22.207 STDOUT terraform:  + network { 2025-03-26 15:06:22.207382 | orchestrator | 15:06:22.207 STDOUT terraform:  + access_network = false 2025-03-26 15:06:22.207414 | orchestrator | 15:06:22.207 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-03-26 15:06:22.207446 | orchestrator | 15:06:22.207 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-03-26 15:06:22.207479 | orchestrator | 15:06:22.207 STDOUT terraform:  + mac = (known 
after apply) 2025-03-26 15:06:22.207513 | orchestrator | 15:06:22.207 STDOUT terraform:  + name = (known after apply) 2025-03-26 15:06:22.207545 | orchestrator | 15:06:22.207 STDOUT terraform:  + port = (known after apply) 2025-03-26 15:06:22.207578 | orchestrator | 15:06:22.207 STDOUT terraform:  + uuid = (known after apply) 2025-03-26 15:06:22.207586 | orchestrator | 15:06:22.207 STDOUT terraform:  } 2025-03-26 15:06:22.207602 | orchestrator | 15:06:22.207 STDOUT terraform:  } 2025-03-26 15:06:22.207647 | orchestrator | 15:06:22.207 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-03-26 15:06:22.207691 | orchestrator | 15:06:22.207 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-03-26 15:06:22.207726 | orchestrator | 15:06:22.207 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-03-26 15:06:22.207762 | orchestrator | 15:06:22.207 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-03-26 15:06:22.207798 | orchestrator | 15:06:22.207 STDOUT terraform:  + all_metadata = (known after apply) 2025-03-26 15:06:22.207835 | orchestrator | 15:06:22.207 STDOUT terraform:  + all_tags = (known after apply) 2025-03-26 15:06:22.207859 | orchestrator | 15:06:22.207 STDOUT terraform:  + availability_zone = "nova" 2025-03-26 15:06:22.207882 | orchestrator | 15:06:22.207 STDOUT terraform:  + config_drive = true 2025-03-26 15:06:22.207918 | orchestrator | 15:06:22.207 STDOUT terraform:  + created = (known after apply) 2025-03-26 15:06:22.207991 | orchestrator | 15:06:22.207 STDOUT terraform:  + flavor_id = (known after apply) 2025-03-26 15:06:22.208021 | orchestrator | 15:06:22.207 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-03-26 15:06:22.208046 | orchestrator | 15:06:22.208 STDOUT terraform:  + force_delete = false 2025-03-26 15:06:22.208083 | orchestrator | 15:06:22.208 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.208121 | orchestrator | 15:06:22.208 STDOUT terraform:  + image_id = (known after apply) 2025-03-26 15:06:22.208160 | orchestrator | 15:06:22.208 STDOUT terraform:  + image_name = (known after apply) 2025-03-26 15:06:22.208188 | orchestrator | 15:06:22.208 STDOUT terraform:  + key_pair = "testbed" 2025-03-26 15:06:22.208222 | orchestrator | 15:06:22.208 STDOUT terraform:  + name = "testbed-node-0" 2025-03-26 15:06:22.208248 | orchestrator | 15:06:22.208 STDOUT terraform:  + power_state = "active" 2025-03-26 15:06:22.208285 | orchestrator | 15:06:22.208 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.208321 | orchestrator | 15:06:22.208 STDOUT terraform:  + security_groups = (known after apply) 2025-03-26 15:06:22.208345 | orchestrator | 15:06:22.208 STDOUT terraform:  + stop_before_destroy = false 2025-03-26 15:06:22.208382 | orchestrator | 15:06:22.208 STDOUT terraform:  + updated = (known after apply) 2025-03-26 15:06:22.208434 | orchestrator | 15:06:22.208 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-03-26 15:06:22.208451 | orchestrator | 15:06:22.208 STDOUT terraform:  + block_device { 2025-03-26 15:06:22.208476 | orchestrator | 15:06:22.208 STDOUT terraform:  + boot_index = 0 2025-03-26 15:06:22.208506 | orchestrator | 15:06:22.208 STDOUT terraform:  + delete_on_termination = false 2025-03-26 15:06:22.208535 | orchestrator | 15:06:22.208 STDOUT terraform:  + destination_type = "volume" 2025-03-26 15:06:22.208565 | orchestrator | 15:06:22.208 STDOUT terraform:  + multiattach = false 2025-03-26 15:06:22.208597 | 
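The manager_server plan above boots from a pre-created volume (source_type and destination_type are both "volume" with boot_index 0), uses the "testbed" keypair and attaches to a pre-built management port. A sketch of a configuration that could produce such a plan; the referenced boot-volume and port resource names are assumptions, not the real testbed source:

    # Hypothetical sketch; names of referenced resources are assumed.
    resource "openstack_compute_instance_v2" "manager_server" {
      name              = "testbed-manager"
      flavor_name       = "OSISM-4V-16"
      key_pair          = openstack_compute_keypair_v2.key.name
      availability_zone = "nova"
      config_drive      = true
      power_state       = "active"

      block_device {
        source_type           = "volume"
        destination_type      = "volume"
        boot_index            = 0
        delete_on_termination = false
        uuid                  = openstack_blockstorage_volume_v3.manager_boot_volume.id  # assumed resource name
      }

      network {
        port = openstack_networking_port_v2.manager_port_management.id
      }
    }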
  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-1"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-2"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-3"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-4"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-5"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }
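The six node_server entries differ only in their name ("testbed-node-0" through "testbed-node-5") and share the same user_data hash, which is typical of a counted resource. A sketch under that assumption; the user_data source and the referenced boot-volume and port resources are placeholders, not the actual testbed code:

    # Hypothetical sketch; file path and referenced resource names are assumed.
    resource "openstack_compute_instance_v2" "node_server" {
      count             = 6
      name              = "testbed-node-${count.index}"
      flavor_name       = "OSISM-8V-32"
      key_pair          = openstack_compute_keypair_v2.key.name
      availability_zone = "nova"
      config_drive      = true
      power_state       = "active"
      user_data         = file("${path.module}/node.user_data")  # the plan only shows its hash

      block_device {
        source_type           = "volume"
        destination_type      = "volume"
        boot_index            = 0
        delete_on_termination = false
        uuid                  = openstack_blockstorage_volume_v3.node_boot_volume[count.index].id  # assumed
      }

      network {
        port = openstack_networking_port_v2.node_port_management[count.index].id
      }
    }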
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id = (known after apply)
      + name = "testbed"
      + private_key = (sensitive value)
      + public_key = (known after apply)
      + region = (known after apply)
      + user_id = (known after apply)
    }
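Because public_key is shown as (known after apply) and private_key as a sensitive value, the keypair is evidently generated by the provider rather than imported. A minimal sketch of such a definition:

    # Hypothetical sketch: omitting public_key makes the provider generate
    # the keypair and expose the private key as a sensitive attribute.
    resource "openstack_compute_keypair_v2" "key" {
      name = "testbed"
    }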
  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
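All eighteen node_volume_attachment entries are identical at plan time because both the instance and volume IDs are only known after apply. A sketch of how they could be declared, assuming the volume-to-node mapping implied by the volume names (volume i attaches to node i mod 6); this mapping is inferred, not taken from the source:

    # Hypothetical sketch; the index mapping is an assumption.
    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 18
      instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
      volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
    }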
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip = (known after apply)
      + floating_ip = (known after apply)
      + id = (known after apply)
      + port_id = (known after apply)
      + region = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address = (known after apply)
      + all_tags = (known after apply)
      + dns_domain = (known after apply)
      + dns_name = (known after apply)
      + fixed_ip = (known after apply)
      + id = (known after apply)
      + pool = "public"
      + port_id = (known after apply)
      + region = (known after apply)
      + subnet_id = (known after apply)
      + tenant_id = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up = (known after apply)
      + all_tags = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain = (known after apply)
      + external = (known after apply)
      + id = (known after apply)
      + mtu = (known after apply)
      + name = "net-testbed-management"
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + shared = (known after apply)
      + tenant_id = (known after apply)
      + transparent_vlan = (known after apply)
      + segments (known after apply)
    }
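The management network is hinted to the "nova" availability zone, and the manager gets a floating IP from the "public" pool that is associated with its management port. A sketch of the corresponding definitions; the port reference is an assumption:

    # Hypothetical sketch of the network and floating-IP wiring.
    resource "openstack_networking_network_v2" "net_management" {
      name                    = "net-testbed-management"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "public"
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }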
  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id = (known after apply)
        }
    }
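Each management port pins its address in 192.168.16.0/20 and whitelists the same set of additional prefixes and VIP addresses via allowed_address_pairs. A sketch for the node ports; the subnet resource name is an assumption:

    # Hypothetical sketch; subnet_management is an assumed resource name.
    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = "192.168.16.${10 + count.index}"
      }

      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.254/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.9/20"
      }
    }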
 + all_tags = (known after apply) 2025-03-26 15:06:22.225225 | orchestrator | 15:06:22.223 STDOUT terraform:  + device_id = (known after apply) 2025-03-26 15:06:22.225236 | orchestrator | 15:06:22.223 STDOUT terraform:  + device_owner = (known after apply) 2025-03-26 15:06:22.225247 | orchestrator | 15:06:22.224 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-26 15:06:22.225258 | orchestrator | 15:06:22.224 STDOUT terraform:  + dns_name = (known after apply) 2025-03-26 15:06:22.225271 | orchestrator | 15:06:22.224 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.225282 | orchestrator | 15:06:22.224 STDOUT terraform:  + mac_address = (known after apply) 2025-03-26 15:06:22.225293 | orchestrator | 15:06:22.224 STDOUT terraform:  + network_id = (known after apply) 2025-03-26 15:06:22.225305 | orchestrator | 15:06:22.224 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-26 15:06:22.225324 | orchestrator | 15:06:22.224 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-26 15:06:22.225336 | orchestrator | 15:06:22.224 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.225347 | orchestrator | 15:06:22.224 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-26 15:06:22.225358 | orchestrator | 15:06:22.224 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.225369 | orchestrator | 15:06:22.224 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.225380 | orchestrator | 15:06:22.224 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-26 15:06:22.225391 | orchestrator | 15:06:22.224 STDOUT terraform:  } 2025-03-26 15:06:22.225402 | orchestrator | 15:06:22.224 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.225413 | orchestrator | 15:06:22.224 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-26 15:06:22.225424 | orchestrator | 15:06:22.224 STDOUT terraform:  } 2025-03-26 15:06:22.225436 | orchestrator | 15:06:22.224 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.225447 | orchestrator | 15:06:22.224 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-26 15:06:22.225459 | orchestrator | 15:06:22.224 STDOUT terraform:  } 2025-03-26 15:06:22.225471 | orchestrator | 15:06:22.224 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.225482 | orchestrator | 15:06:22.224 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-26 15:06:22.225493 | orchestrator | 15:06:22.224 STDOUT terraform:  } 2025-03-26 15:06:22.225504 | orchestrator | 15:06:22.224 STDOUT terraform:  + binding (known after apply) 2025-03-26 15:06:22.225515 | orchestrator | 15:06:22.224 STDOUT terraform:  + fixed_ip { 2025-03-26 15:06:22.225527 | orchestrator | 15:06:22.224 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-03-26 15:06:22.225542 | orchestrator | 15:06:22.224 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-26 15:06:22.225553 | orchestrator | 15:06:22.224 STDOUT terraform:  } 2025-03-26 15:06:22.225565 | orchestrator | 15:06:22.224 STDOUT terraform:  } 2025-03-26 15:06:22.225576 | orchestrator | 15:06:22.224 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-03-26 15:06:22.225587 | orchestrator | 15:06:22.224 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-26 15:06:22.225598 | orchestrator | 15:06:22.224 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-26 15:06:22.225609 | orchestrator | 
15:06:22.224 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-26 15:06:22.225620 | orchestrator | 15:06:22.224 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-26 15:06:22.225631 | orchestrator | 15:06:22.224 STDOUT terraform:  + all_tags = (known after apply) 2025-03-26 15:06:22.225642 | orchestrator | 15:06:22.224 STDOUT terraform:  + device_id = (known after apply) 2025-03-26 15:06:22.225653 | orchestrator | 15:06:22.224 STDOUT terraform:  + device_owner = (known after apply) 2025-03-26 15:06:22.225664 | orchestrator | 15:06:22.224 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-26 15:06:22.225675 | orchestrator | 15:06:22.224 STDOUT terraform:  + dns_name = (known after apply) 2025-03-26 15:06:22.225686 | orchestrator | 15:06:22.224 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.225697 | orchestrator | 15:06:22.225 STDOUT terraform:  + mac_address = (known after apply) 2025-03-26 15:06:22.225708 | orchestrator | 15:06:22.225 STDOUT terraform:  + network_id = (known after apply) 2025-03-26 15:06:22.225719 | orchestrator | 15:06:22.225 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-26 15:06:22.225730 | orchestrator | 15:06:22.225 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-26 15:06:22.225741 | orchestrator | 15:06:22.225 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.225760 | orchestrator | 15:06:22.225 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-26 15:06:22.225808 | orchestrator | 15:06:22.225 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.225821 | orchestrator | 15:06:22.225 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.225832 | orchestrator | 15:06:22.225 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-26 15:06:22.225843 | orchestrator | 15:06:22.225 STDOUT terraform:  } 2025-03-26 15:06:22.225855 | orchestrator | 15:06:22.225 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.225866 | orchestrator | 15:06:22.225 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-26 15:06:22.225877 | orchestrator | 15:06:22.225 STDOUT terraform:  } 2025-03-26 15:06:22.225893 | orchestrator | 15:06:22.225 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.225904 | orchestrator | 15:06:22.225 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-26 15:06:22.225915 | orchestrator | 15:06:22.225 STDOUT terraform:  } 2025-03-26 15:06:22.225963 | orchestrator | 15:06:22.225 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.225976 | orchestrator | 15:06:22.225 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-26 15:06:22.225987 | orchestrator | 15:06:22.225 STDOUT terraform:  } 2025-03-26 15:06:22.225998 | orchestrator | 15:06:22.225 STDOUT terraform:  + binding (known after apply) 2025-03-26 15:06:22.226009 | orchestrator | 15:06:22.225 STDOUT terraform:  + fixed_ip { 2025-03-26 15:06:22.226042 | orchestrator | 15:06:22.225 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-03-26 15:06:22.226054 | orchestrator | 15:06:22.225 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-26 15:06:22.226065 | orchestrator | 15:06:22.225 STDOUT terraform:  } 2025-03-26 15:06:22.226076 | orchestrator | 15:06:22.225 STDOUT terraform:  } 2025-03-26 15:06:22.226087 | orchestrator | 15:06:22.225 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-03-26 15:06:22.226098 | 
orchestrator | 15:06:22.225 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-26 15:06:22.226109 | orchestrator | 15:06:22.225 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-26 15:06:22.226120 | orchestrator | 15:06:22.225 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-26 15:06:22.226131 | orchestrator | 15:06:22.225 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-26 15:06:22.226146 | orchestrator | 15:06:22.225 STDOUT terraform:  + all_tags = (known after apply) 2025-03-26 15:06:22.226214 | orchestrator | 15:06:22.225 STDOUT terraform:  + device_id = (known after apply) 2025-03-26 15:06:22.226228 | orchestrator | 15:06:22.225 STDOUT terraform:  + device_owner = (known after apply) 2025-03-26 15:06:22.226239 | orchestrator | 15:06:22.225 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-26 15:06:22.226277 | orchestrator | 15:06:22.225 STDOUT terraform:  + dns_name = (known after apply) 2025-03-26 15:06:22.226288 | orchestrator | 15:06:22.225 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.226297 | orchestrator | 15:06:22.225 STDOUT terraform:  + mac_address = (known after apply) 2025-03-26 15:06:22.226306 | orchestrator | 15:06:22.225 STDOUT terraform:  + network_id = (known after apply) 2025-03-26 15:06:22.226316 | orchestrator | 15:06:22.226 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-26 15:06:22.226325 | orchestrator | 15:06:22.226 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-26 15:06:22.226335 | orchestrator | 15:06:22.226 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.226348 | orchestrator | 15:06:22.226 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-26 15:06:22.226373 | orchestrator | 15:06:22.226 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.226383 | orchestrator | 15:06:22.226 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.226393 | orchestrator | 15:06:22.226 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-26 15:06:22.226405 | orchestrator | 15:06:22.226 STDOUT terraform:  } 2025-03-26 15:06:22.226424 | orchestrator | 15:06:22.226 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.226434 | orchestrator | 15:06:22.226 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-26 15:06:22.226443 | orchestrator | 15:06:22.226 STDOUT terraform:  } 2025-03-26 15:06:22.226453 | orchestrator | 15:06:22.226 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.226465 | orchestrator | 15:06:22.226 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-26 15:06:22.226491 | orchestrator | 15:06:22.226 STDOUT terraform:  } 2025-03-26 15:06:22.226502 | orchestrator | 15:06:22.226 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.226511 | orchestrator | 15:06:22.226 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-26 15:06:22.226520 | orchestrator | 15:06:22.226 STDOUT terraform:  } 2025-03-26 15:06:22.226530 | orchestrator | 15:06:22.226 STDOUT terraform:  + binding (known after apply) 2025-03-26 15:06:22.226539 | orchestrator | 15:06:22.226 STDOUT terraform:  + fixed_ip { 2025-03-26 15:06:22.226551 | orchestrator | 15:06:22.226 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-03-26 15:06:22.226584 | orchestrator | 15:06:22.226 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-26 15:06:22.226595 | orchestrator | 15:06:22.226 STDOUT 
terraform:  } 2025-03-26 15:06:22.226604 | orchestrator | 15:06:22.226 STDOUT terraform:  } 2025-03-26 15:06:22.226617 | orchestrator | 15:06:22.226 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-03-26 15:06:22.226629 | orchestrator | 15:06:22.226 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-03-26 15:06:22.226679 | orchestrator | 15:06:22.226 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-26 15:06:22.226708 | orchestrator | 15:06:22.226 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-03-26 15:06:22.226745 | orchestrator | 15:06:22.226 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-03-26 15:06:22.226783 | orchestrator | 15:06:22.226 STDOUT terraform:  + all_tags = (known after apply) 2025-03-26 15:06:22.226820 | orchestrator | 15:06:22.226 STDOUT terraform:  + device_id = (known after apply) 2025-03-26 15:06:22.226860 | orchestrator | 15:06:22.226 STDOUT terraform:  + device_owner = (known after apply) 2025-03-26 15:06:22.226897 | orchestrator | 15:06:22.226 STDOUT terraform:  + dns_assignment = (known after apply) 2025-03-26 15:06:22.226948 | orchestrator | 15:06:22.226 STDOUT terraform:  + dns_name = (known after apply) 2025-03-26 15:06:22.226980 | orchestrator | 15:06:22.226 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.227018 | orchestrator | 15:06:22.226 STDOUT terraform:  + mac_address = (known after apply) 2025-03-26 15:06:22.227057 | orchestrator | 15:06:22.227 STDOUT terraform:  + network_id = (known after apply) 2025-03-26 15:06:22.227094 | orchestrator | 15:06:22.227 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-03-26 15:06:22.227132 | orchestrator | 15:06:22.227 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-03-26 15:06:22.227171 | orchestrator | 15:06:22.227 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.227207 | orchestrator | 15:06:22.227 STDOUT terraform:  + security_group_ids = (known after apply) 2025-03-26 15:06:22.227245 | orchestrator | 15:06:22.227 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.227258 | orchestrator | 15:06:22.227 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.227291 | orchestrator | 15:06:22.227 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-03-26 15:06:22.227305 | orchestrator | 15:06:22.227 STDOUT terraform:  } 2025-03-26 15:06:22.227318 | orchestrator | 15:06:22.227 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.227348 | orchestrator | 15:06:22.227 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-03-26 15:06:22.227362 | orchestrator | 15:06:22.227 STDOUT terraform:  } 2025-03-26 15:06:22.227374 | orchestrator | 15:06:22.227 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.227404 | orchestrator | 15:06:22.227 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-03-26 15:06:22.227417 | orchestrator | 15:06:22.227 STDOUT terraform:  } 2025-03-26 15:06:22.227429 | orchestrator | 15:06:22.227 STDOUT terraform:  + allowed_address_pairs { 2025-03-26 15:06:22.227460 | orchestrator | 15:06:22.227 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-03-26 15:06:22.227473 | orchestrator | 15:06:22.227 STDOUT terraform:  } 2025-03-26 15:06:22.227486 | orchestrator | 15:06:22.227 STDOUT terraform:  + binding (known after apply) 2025-03-26 15:06:22.227499 | orchestrator | 15:06:22.227 STDOUT terraform:  + fixed_ip { 2025-03-26 
15:06:22.227529 | orchestrator | 15:06:22.227 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-03-26 15:06:22.227566 | orchestrator | 15:06:22.227 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-26 15:06:22.227580 | orchestrator | 15:06:22.227 STDOUT terraform:  } 2025-03-26 15:06:22.227636 | orchestrator | 15:06:22.227 STDOUT terraform:  } 2025-03-26 15:06:22.227650 | orchestrator | 15:06:22.227 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-03-26 15:06:22.227687 | orchestrator | 15:06:22.227 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-03-26 15:06:22.227700 | orchestrator | 15:06:22.227 STDOUT terraform:  + force_destroy = false 2025-03-26 15:06:22.227729 | orchestrator | 15:06:22.227 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.227758 | orchestrator | 15:06:22.227 STDOUT terraform:  + port_id = (known after apply) 2025-03-26 15:06:22.227789 | orchestrator | 15:06:22.227 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.227820 | orchestrator | 15:06:22.227 STDOUT terraform:  + router_id = (known after apply) 2025-03-26 15:06:22.227849 | orchestrator | 15:06:22.227 STDOUT terraform:  + subnet_id = (known after apply) 2025-03-26 15:06:22.227861 | orchestrator | 15:06:22.227 STDOUT terraform:  } 2025-03-26 15:06:22.227897 | orchestrator | 15:06:22.227 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-03-26 15:06:22.227962 | orchestrator | 15:06:22.227 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-03-26 15:06:22.227982 | orchestrator | 15:06:22.227 STDOUT terraform:  + admin_state_up = (known after apply) 2025-03-26 15:06:22.228023 | orchestrator | 15:06:22.227 STDOUT terraform:  + all_tags = (known after apply) 2025-03-26 15:06:22.228050 | orchestrator | 15:06:22.228 STDOUT terraform:  + availability_zone_hints = [ 2025-03-26 15:06:22.228093 | orchestrator | 15:06:22.228 STDOUT terraform:  + "nova", 2025-03-26 15:06:22.228106 | orchestrator | 15:06:22.228 STDOUT terraform:  ] 2025-03-26 15:06:22.228134 | orchestrator | 15:06:22.228 STDOUT terraform:  + distributed = (known after apply) 2025-03-26 15:06:22.228173 | orchestrator | 15:06:22.228 STDOUT terraform:  + enable_snat = (known after apply) 2025-03-26 15:06:22.228224 | orchestrator | 15:06:22.228 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-03-26 15:06:22.228263 | orchestrator | 15:06:22.228 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.228295 | orchestrator | 15:06:22.228 STDOUT terraform:  + name = "testbed" 2025-03-26 15:06:22.228333 | orchestrator | 15:06:22.228 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.228373 | orchestrator | 15:06:22.228 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.228404 | orchestrator | 15:06:22.228 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-03-26 15:06:22.228415 | orchestrator | 15:06:22.228 STDOUT terraform:  } 2025-03-26 15:06:22.228468 | orchestrator | 15:06:22.228 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-03-26 15:06:22.228524 | orchestrator | 15:06:22.228 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-03-26 15:06:22.228536 | orchestrator | 15:06:22.228 STDOUT terraform:  + description = "ssh" 2025-03-26 
15:06:22.228567 | orchestrator | 15:06:22.228 STDOUT terraform:  + direction = "ingress" 2025-03-26 15:06:22.228579 | orchestrator | 15:06:22.228 STDOUT terraform:  + ethertype = "IPv4" 2025-03-26 15:06:22.228618 | orchestrator | 15:06:22.228 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.228630 | orchestrator | 15:06:22.228 STDOUT terraform:  + port_range_max = 22 2025-03-26 15:06:22.228653 | orchestrator | 15:06:22.228 STDOUT terraform:  + port_range_min = 22 2025-03-26 15:06:22.228667 | orchestrator | 15:06:22.228 STDOUT terraform:  + protocol = "tcp" 2025-03-26 15:06:22.228706 | orchestrator | 15:06:22.228 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.228738 | orchestrator | 15:06:22.228 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-26 15:06:22.228763 | orchestrator | 15:06:22.228 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-26 15:06:22.228794 | orchestrator | 15:06:22.228 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-26 15:06:22.228826 | orchestrator | 15:06:22.228 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.228837 | orchestrator | 15:06:22.228 STDOUT terraform:  } 2025-03-26 15:06:22.228890 | orchestrator | 15:06:22.228 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-03-26 15:06:22.228955 | orchestrator | 15:06:22.228 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-03-26 15:06:22.228983 | orchestrator | 15:06:22.228 STDOUT terraform:  + description = "wireguard" 2025-03-26 15:06:22.229001 | orchestrator | 15:06:22.228 STDOUT terraform:  + direction = "ingress" 2025-03-26 15:06:22.229026 | orchestrator | 15:06:22.228 STDOUT terraform:  + ethertype = "IPv4" 2025-03-26 15:06:22.229057 | orchestrator | 15:06:22.229 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.229069 | orchestrator | 15:06:22.229 STDOUT terraform:  + port_range_max = 51820 2025-03-26 15:06:22.229096 | orchestrator | 15:06:22.229 STDOUT terraform:  + port_range_min = 51820 2025-03-26 15:06:22.229108 | orchestrator | 15:06:22.229 STDOUT terraform:  + protocol = "udp" 2025-03-26 15:06:22.229146 | orchestrator | 15:06:22.229 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.229178 | orchestrator | 15:06:22.229 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-26 15:06:22.229205 | orchestrator | 15:06:22.229 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-26 15:06:22.229234 | orchestrator | 15:06:22.229 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-26 15:06:22.229266 | orchestrator | 15:06:22.229 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.229278 | orchestrator | 15:06:22.229 STDOUT terraform:  } 2025-03-26 15:06:22.229331 | orchestrator | 15:06:22.229 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-03-26 15:06:22.229386 | orchestrator | 15:06:22.229 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-03-26 15:06:22.229412 | orchestrator | 15:06:22.229 STDOUT terraform:  + direction = "ingress" 2025-03-26 15:06:22.229423 | orchestrator | 15:06:22.229 STDOUT terraform:  + ethertype = "IPv4" 2025-03-26 15:06:22.229461 | orchestrator | 15:06:22.229 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.229473 | orchestrator | 
15:06:22.229 STDOUT terraform:  + protocol = "tcp" 2025-03-26 15:06:22.229511 | orchestrator | 15:06:22.229 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.229542 | orchestrator | 15:06:22.229 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-26 15:06:22.229572 | orchestrator | 15:06:22.229 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-03-26 15:06:22.229605 | orchestrator | 15:06:22.229 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-26 15:06:22.229636 | orchestrator | 15:06:22.229 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.229648 | orchestrator | 15:06:22.229 STDOUT terraform:  } 2025-03-26 15:06:22.229702 | orchestrator | 15:06:22.229 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-03-26 15:06:22.229756 | orchestrator | 15:06:22.229 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-03-26 15:06:22.229772 | orchestrator | 15:06:22.229 STDOUT terraform:  + direction = "ingress" 2025-03-26 15:06:22.229799 | orchestrator | 15:06:22.229 STDOUT terraform:  + ethertype = "IPv4" 2025-03-26 15:06:22.229831 | orchestrator | 15:06:22.229 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.229843 | orchestrator | 15:06:22.229 STDOUT terraform:  + protocol = "udp" 2025-03-26 15:06:22.229881 | orchestrator | 15:06:22.229 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.229913 | orchestrator | 15:06:22.229 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-26 15:06:22.229954 | orchestrator | 15:06:22.229 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-03-26 15:06:22.229981 | orchestrator | 15:06:22.229 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-26 15:06:22.230029 | orchestrator | 15:06:22.229 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.234110 | orchestrator | 15:06:22.230 STDOUT terraform:  } 2025-03-26 15:06:22.234206 | orchestrator | 15:06:22.230 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-03-26 15:06:22.234222 | orchestrator | 15:06:22.230 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-03-26 15:06:22.234240 | orchestrator | 15:06:22.230 STDOUT terraform:  + direction = "ingress" 2025-03-26 15:06:22.234250 | orchestrator | 15:06:22.230 STDOUT terraform:  + ethertype = "IPv4" 2025-03-26 15:06:22.234260 | orchestrator | 15:06:22.230 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.234270 | orchestrator | 15:06:22.230 STDOUT terraform:  + protocol = "icmp" 2025-03-26 15:06:22.234280 | orchestrator | 15:06:22.230 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.234290 | orchestrator | 15:06:22.230 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-26 15:06:22.234299 | orchestrator | 15:06:22.230 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-26 15:06:22.234309 | orchestrator | 15:06:22.230 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-26 15:06:22.234318 | orchestrator | 15:06:22.230 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.234328 | orchestrator | 15:06:22.230 STDOUT terraform:  } 2025-03-26 15:06:22.234338 | orchestrator | 15:06:22.230 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-03-26 15:06:22.234348 | orchestrator | 15:06:22.230 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-03-26 15:06:22.234359 | orchestrator | 15:06:22.230 STDOUT terraform:  + direction = "ingress" 2025-03-26 15:06:22.234369 | orchestrator | 15:06:22.230 STDOUT terraform:  + ethertype = "IPv4" 2025-03-26 15:06:22.234378 | orchestrator | 15:06:22.230 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.234388 | orchestrator | 15:06:22.230 STDOUT terraform:  + protocol = "tcp" 2025-03-26 15:06:22.234398 | orchestrator | 15:06:22.230 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.234407 | orchestrator | 15:06:22.231 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-26 15:06:22.234429 | orchestrator | 15:06:22.231 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-26 15:06:22.234439 | orchestrator | 15:06:22.231 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-26 15:06:22.234449 | orchestrator | 15:06:22.231 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.234458 | orchestrator | 15:06:22.231 STDOUT terraform:  } 2025-03-26 15:06:22.234468 | orchestrator | 15:06:22.231 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-03-26 15:06:22.234478 | orchestrator | 15:06:22.231 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-03-26 15:06:22.234487 | orchestrator | 15:06:22.231 STDOUT terraform:  + direction = "ingress" 2025-03-26 15:06:22.234496 | orchestrator | 15:06:22.231 STDOUT terraform:  + ethertype = "IPv4" 2025-03-26 15:06:22.234506 | orchestrator | 15:06:22.231 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.234515 | orchestrator | 15:06:22.231 STDOUT terraform:  + protocol = "udp" 2025-03-26 15:06:22.234525 | orchestrator | 15:06:22.231 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.234534 | orchestrator | 15:06:22.231 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-26 15:06:22.234543 | orchestrator | 15:06:22.231 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-26 15:06:22.234553 | orchestrator | 15:06:22.231 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-26 15:06:22.234562 | orchestrator | 15:06:22.231 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.234571 | orchestrator | 15:06:22.231 STDOUT terraform:  } 2025-03-26 15:06:22.234587 | orchestrator | 15:06:22.231 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-03-26 15:06:22.234598 | orchestrator | 15:06:22.231 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-03-26 15:06:22.234607 | orchestrator | 15:06:22.231 STDOUT terraform:  + direction = "ingress" 2025-03-26 15:06:22.234617 | orchestrator | 15:06:22.231 STDOUT terraform:  + ethertype = "IPv4" 2025-03-26 15:06:22.234626 | orchestrator | 15:06:22.231 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.234636 | orchestrator | 15:06:22.231 STDOUT terraform:  + protocol = "icmp" 2025-03-26 15:06:22.234645 | orchestrator | 15:06:22.231 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.234654 | orchestrator | 15:06:22.231 STDOUT terraform:  + remote_group_id = (known after apply) 
2025-03-26 15:06:22.234664 | orchestrator | 15:06:22.231 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-26 15:06:22.234673 | orchestrator | 15:06:22.232 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-26 15:06:22.234682 | orchestrator | 15:06:22.232 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.234692 | orchestrator | 15:06:22.232 STDOUT terraform:  } 2025-03-26 15:06:22.234701 | orchestrator | 15:06:22.232 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-03-26 15:06:22.234716 | orchestrator | 15:06:22.232 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-03-26 15:06:22.234726 | orchestrator | 15:06:22.232 STDOUT terraform:  + description = "vrrp" 2025-03-26 15:06:22.234735 | orchestrator | 15:06:22.232 STDOUT terraform:  + direction = "ingress" 2025-03-26 15:06:22.234745 | orchestrator | 15:06:22.232 STDOUT terraform:  + ethertype = "IPv4" 2025-03-26 15:06:22.234754 | orchestrator | 15:06:22.232 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.234764 | orchestrator | 15:06:22.232 STDOUT terraform:  + protocol = "112" 2025-03-26 15:06:22.234773 | orchestrator | 15:06:22.232 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.234786 | orchestrator | 15:06:22.232 STDOUT terraform:  + remote_group_id = (known after apply) 2025-03-26 15:06:22.234795 | orchestrator | 15:06:22.232 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-03-26 15:06:22.234805 | orchestrator | 15:06:22.232 STDOUT terraform:  + security_group_id = (known after apply) 2025-03-26 15:06:22.234814 | orchestrator | 15:06:22.232 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.234824 | orchestrator | 15:06:22.232 STDOUT terraform:  } 2025-03-26 15:06:22.234833 | orchestrator | 15:06:22.232 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-03-26 15:06:22.234843 | orchestrator | 15:06:22.232 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-03-26 15:06:22.234852 | orchestrator | 15:06:22.232 STDOUT terraform:  + all_tags = (known after apply) 2025-03-26 15:06:22.234862 | orchestrator | 15:06:22.232 STDOUT terraform:  + description = "management security group" 2025-03-26 15:06:22.234871 | orchestrator | 15:06:22.232 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.234881 | orchestrator | 15:06:22.232 STDOUT terraform:  + name = "testbed-management" 2025-03-26 15:06:22.234890 | orchestrator | 15:06:22.232 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.234900 | orchestrator | 15:06:22.232 STDOUT terraform:  + stateful = (known after apply) 2025-03-26 15:06:22.234909 | orchestrator | 15:06:22.232 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.234918 | orchestrator | 15:06:22.232 STDOUT terraform:  } 2025-03-26 15:06:22.234947 | orchestrator | 15:06:22.232 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-03-26 15:06:22.234957 | orchestrator | 15:06:22.232 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-03-26 15:06:22.234972 | orchestrator | 15:06:22.233 STDOUT terraform:  + all_tags = (known after apply) 2025-03-26 15:06:22.234983 | orchestrator | 15:06:22.233 STDOUT terraform:  + description = "node security group" 2025-03-26 15:06:22.234992 | 
orchestrator | 15:06:22.233 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.235002 | orchestrator | 15:06:22.233 STDOUT terraform:  + name = "testbed-node" 2025-03-26 15:06:22.235011 | orchestrator | 15:06:22.233 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.235021 | orchestrator | 15:06:22.233 STDOUT terraform:  + stateful = (known after apply) 2025-03-26 15:06:22.235035 | orchestrator | 15:06:22.233 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.235045 | orchestrator | 15:06:22.233 STDOUT terraform:  } 2025-03-26 15:06:22.235055 | orchestrator | 15:06:22.233 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-03-26 15:06:22.235064 | orchestrator | 15:06:22.233 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-03-26 15:06:22.235073 | orchestrator | 15:06:22.233 STDOUT terraform:  + all_tags = (known after apply) 2025-03-26 15:06:22.235083 | orchestrator | 15:06:22.233 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-03-26 15:06:22.235093 | orchestrator | 15:06:22.233 STDOUT terraform:  + dns_nameservers = [ 2025-03-26 15:06:22.235102 | orchestrator | 15:06:22.233 STDOUT terraform:  + "8.8.8.8", 2025-03-26 15:06:22.235112 | orchestrator | 15:06:22.233 STDOUT terraform:  + "9.9.9.9", 2025-03-26 15:06:22.235121 | orchestrator | 15:06:22.233 STDOUT terraform:  ] 2025-03-26 15:06:22.235131 | orchestrator | 15:06:22.233 STDOUT terraform:  + enable_dhcp = true 2025-03-26 15:06:22.235140 | orchestrator | 15:06:22.233 STDOUT terraform:  + gateway_ip = (known after apply) 2025-03-26 15:06:22.235150 | orchestrator | 15:06:22.233 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.235159 | orchestrator | 15:06:22.233 STDOUT terraform:  + ip_version = 4 2025-03-26 15:06:22.235169 | orchestrator | 15:06:22.233 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-03-26 15:06:22.235178 | orchestrator | 15:06:22.233 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-03-26 15:06:22.235187 | orchestrator | 15:06:22.233 STDOUT terraform:  + name = "subnet-testbed-management" 2025-03-26 15:06:22.235197 | orchestrator | 15:06:22.233 STDOUT terraform:  + network_id = (known after apply) 2025-03-26 15:06:22.235206 | orchestrator | 15:06:22.233 STDOUT terraform:  + no_gateway = false 2025-03-26 15:06:22.235216 | orchestrator | 15:06:22.233 STDOUT terraform:  + region = (known after apply) 2025-03-26 15:06:22.235225 | orchestrator | 15:06:22.233 STDOUT terraform:  + service_types = (known after apply) 2025-03-26 15:06:22.235234 | orchestrator | 15:06:22.233 STDOUT terraform:  + tenant_id = (known after apply) 2025-03-26 15:06:22.235243 | orchestrator | 15:06:22.233 STDOUT terraform:  + allocation_pool { 2025-03-26 15:06:22.235253 | orchestrator | 15:06:22.233 STDOUT terraform:  + end = "192.168.31.250" 2025-03-26 15:06:22.235263 | orchestrator | 15:06:22.234 STDOUT terraform:  + start = "192.168.31.200" 2025-03-26 15:06:22.235272 | orchestrator | 15:06:22.234 STDOUT terraform:  } 2025-03-26 15:06:22.235281 | orchestrator | 15:06:22.234 STDOUT terraform:  } 2025-03-26 15:06:22.235291 | orchestrator | 15:06:22.234 STDOUT terraform:  # terraform_data.image will be created 2025-03-26 15:06:22.235300 | orchestrator | 15:06:22.234 STDOUT terraform:  + resource "terraform_data" "image" { 2025-03-26 15:06:22.235310 | orchestrator | 15:06:22.234 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.235319 | orchestrator | 
15:06:22.234 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-03-26 15:06:22.235332 | orchestrator | 15:06:22.234 STDOUT terraform:  + output = (known after apply) 2025-03-26 15:06:22.235342 | orchestrator | 15:06:22.234 STDOUT terraform:  } 2025-03-26 15:06:22.235351 | orchestrator | 15:06:22.234 STDOUT terraform:  # terraform_data.image_node will be created 2025-03-26 15:06:22.235364 | orchestrator | 15:06:22.234 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-03-26 15:06:22.459447 | orchestrator | 15:06:22.234 STDOUT terraform:  + id = (known after apply) 2025-03-26 15:06:22.459553 | orchestrator | 15:06:22.234 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-03-26 15:06:22.459573 | orchestrator | 15:06:22.234 STDOUT terraform:  + output = (known after apply) 2025-03-26 15:06:22.459587 | orchestrator | 15:06:22.234 STDOUT terraform:  } 2025-03-26 15:06:22.459601 | orchestrator | 15:06:22.234 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-03-26 15:06:22.459614 | orchestrator | 15:06:22.234 STDOUT terraform: Changes to Outputs: 2025-03-26 15:06:22.459627 | orchestrator | 15:06:22.234 STDOUT terraform:  + manager_address = (sensitive value) 2025-03-26 15:06:22.459640 | orchestrator | 15:06:22.234 STDOUT terraform:  + private_key = (sensitive value) 2025-03-26 15:06:22.459668 | orchestrator | 15:06:22.458 STDOUT terraform: terraform_data.image_node: Creating... 2025-03-26 15:06:22.478142 | orchestrator | 15:06:22.458 STDOUT terraform: terraform_data.image: Creating... 2025-03-26 15:06:22.478189 | orchestrator | 15:06:22.458 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=35981d6e-addb-9197-4095-079d4b7d786d] 2025-03-26 15:06:22.478197 | orchestrator | 15:06:22.458 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=7872ad7e-c494-25fb-d878-64a86e57129c] 2025-03-26 15:06:22.478210 | orchestrator | 15:06:22.478 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-03-26 15:06:22.478876 | orchestrator | 15:06:22.478 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-03-26 15:06:22.478921 | orchestrator | 15:06:22.478 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-03-26 15:06:22.480135 | orchestrator | 15:06:22.480 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-03-26 15:06:22.483566 | orchestrator | 15:06:22.483 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-03-26 15:06:22.483592 | orchestrator | 15:06:22.483 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-03-26 15:06:22.483641 | orchestrator | 15:06:22.483 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-03-26 15:06:22.483692 | orchestrator | 15:06:22.483 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-03-26 15:06:22.483745 | orchestrator | 15:06:22.483 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-03-26 15:06:22.484947 | orchestrator | 15:06:22.484 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 
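The plan output above repeats the same definition for every management port: the four allowed-address-pair prefixes are identical on each port, and only the fixed IP changes (192.168.16.11-.15 for indices 1-5). A minimal count-indexed sketch of such a port, reconstructed from the planned values only; the network/subnet references and the index-0 address are assumptions, not the testbed's actual source:

  # Sketch reconstructed from the plan output; cross-resource references are assumed.
  resource "openstack_networking_port_v2" "node_port_management" {
    count      = 6
    network_id = openstack_networking_network_v2.net_management.id

    fixed_ip {
      subnet_id  = openstack_networking_subnet_v2.subnet_management.id
      ip_address = "192.168.16.${10 + count.index}"   # .11-.15 appear in the plan; index 0 assumed to be .10
    }

    # The same four allowed-address pairs appear on every node port in the plan.
    dynamic "allowed_address_pairs" {
      for_each = ["192.168.112.0/20", "192.168.16.254/20", "192.168.16.8/20", "192.168.16.9/20"]
      content {
        ip_address = allowed_address_pairs.value
      }
    }
  }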
2025-03-26 15:06:22.935662 | orchestrator | 15:06:22.935 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-03-26 15:06:22.941023 | orchestrator | 15:06:22.940 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-03-26 15:06:22.942626 | orchestrator | 15:06:22.942 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-03-26 15:06:22.947781 | orchestrator | 15:06:22.947 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-03-26 15:06:23.146381 | orchestrator | 15:06:23.146 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-03-26 15:06:23.154319 | orchestrator | 15:06:23.154 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-03-26 15:06:28.323356 | orchestrator | 15:06:28.322 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=00f8efd5-4cdd-451b-b15a-315b6a783070] 2025-03-26 15:06:28.330558 | orchestrator | 15:06:28.330 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-03-26 15:06:32.480247 | orchestrator | 15:06:32.479 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-03-26 15:06:32.480347 | orchestrator | 15:06:32.480 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-03-26 15:06:32.484271 | orchestrator | 15:06:32.484 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-03-26 15:06:32.484407 | orchestrator | 15:06:32.484 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-03-26 15:06:32.484556 | orchestrator | 15:06:32.484 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-03-26 15:06:32.485372 | orchestrator | 15:06:32.485 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-03-26 15:06:32.943227 | orchestrator | 15:06:32.942 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-03-26 15:06:32.948301 | orchestrator | 15:06:32.948 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-03-26 15:06:33.053568 | orchestrator | 15:06:33.053 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 11s [id=3084e68a-ebad-4437-bd94-82607113cb35] 2025-03-26 15:06:33.331541 | orchestrator | 15:06:33.062 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-03-26 15:06:38.333451 | orchestrator | 15:06:33.073 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=a37eb2c8-f05e-43da-8c12-0830eb7c9d71] 2025-03-26 15:06:38.333581 | orchestrator | 15:06:33.079 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-03-26 15:06:38.333598 | orchestrator | 15:06:33.089 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=1f0ffd5a-0515-4e82-9e1a-64c4889ae37d] 2025-03-26 15:06:38.333609 | orchestrator | 15:06:33.096 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 
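Both image data sources read above resolve to the same Glance image (id=cd9ae1ce-...), which is consistent with both terraform_data resources carrying the input "Ubuntu 24.04". A sketch of the lookup this implies, plus the keypair created with id=testbed; the selection arguments beyond the image name are assumptions, since they are not printed in the log:

  data "openstack_images_image_v2" "image" {
    name        = "Ubuntu 24.04"   # matches the terraform_data input shown in the plan
    most_recent = true             # assumption: the real filter arguments are not visible in the log
  }

  data "openstack_images_image_v2" "image_node" {
    name        = "Ubuntu 24.04"
    most_recent = true
  }

  resource "openstack_compute_keypair_v2" "key" {
    name = "testbed"
    # With no public_key set, the provider generates a key pair, which matches the
    # sensitive private_key output in the plan and the id_rsa files written further down.
  }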
2025-03-26 15:06:38.333620 | orchestrator | 15:06:33.107 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=018eeca1-6df5-40cc-92d0-1bd4b2f4c48a] 2025-03-26 15:06:38.333630 | orchestrator | 15:06:33.108 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 11s [id=461b6804-a72b-4d73-873c-350a148be214] 2025-03-26 15:06:38.333641 | orchestrator | 15:06:33.116 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=e160b5d2-bb2c-4d46-9a19-0f90a2a200dd] 2025-03-26 15:06:38.334692 | orchestrator | 15:06:33.118 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-03-26 15:06:38.334709 | orchestrator | 15:06:33.119 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-03-26 15:06:38.334720 | orchestrator | 15:06:33.125 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-03-26 15:06:38.334730 | orchestrator | 15:06:33.155 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-03-26 15:06:38.334742 | orchestrator | 15:06:33.164 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=3983f5b4-5dfc-4eda-b37e-7ecf02ef52dc] 2025-03-26 15:06:38.334753 | orchestrator | 15:06:33.166 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=538d5b73-1cd2-4d51-a71b-5a4f6cbc6cf8] 2025-03-26 15:06:38.334763 | orchestrator | 15:06:33.171 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-03-26 15:06:38.334773 | orchestrator | 15:06:33.174 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-03-26 15:06:38.334784 | orchestrator | 15:06:33.319 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 10s [id=d3112d8a-8c3b-4cce-9d8e-e42180f2448c] 2025-03-26 15:06:38.334795 | orchestrator | 15:06:33.329 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-03-26 15:06:38.334819 | orchestrator | 15:06:38.333 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-03-26 15:06:38.480759 | orchestrator | 15:06:38.480 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=111b4eab-b6cc-4138-b8ce-a1d0ca0ede1d] 2025-03-26 15:06:38.489440 | orchestrator | 15:06:38.489 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-03-26 15:06:43.064011 | orchestrator | 15:06:43.063 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-03-26 15:06:43.080148 | orchestrator | 15:06:43.079 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-03-26 15:06:43.097340 | orchestrator | 15:06:43.097 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-03-26 15:06:43.119862 | orchestrator | 15:06:43.119 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-03-26 15:06:43.120896 | orchestrator | 15:06:43.120 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... 
[10s elapsed] 2025-03-26 15:06:43.127175 | orchestrator | 15:06:43.127 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-03-26 15:06:43.172690 | orchestrator | 15:06:43.172 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-03-26 15:06:43.174925 | orchestrator | 15:06:43.174 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-03-26 15:06:43.250521 | orchestrator | 15:06:43.250 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=fe7240ff-f38b-479a-b34e-426facce92bd] 2025-03-26 15:06:43.264775 | orchestrator | 15:06:43.264 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-03-26 15:06:43.274389 | orchestrator | 15:06:43.274 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=958efc11-9688-4d0a-823e-66b7f0ad5251] 2025-03-26 15:06:43.283828 | orchestrator | 15:06:43.283 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-03-26 15:06:43.323773 | orchestrator | 15:06:43.323 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=07965c29-3afe-4431-b6c9-77a12f491bee] 2025-03-26 15:06:43.329680 | orchestrator | 15:06:43.329 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-03-26 15:06:43.329799 | orchestrator | 15:06:43.329 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-03-26 15:06:43.349886 | orchestrator | 15:06:43.349 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=288b9d42-3702-4f33-88cc-3a76a9d8c61f] 2025-03-26 15:06:43.354404 | orchestrator | 15:06:43.354 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-03-26 15:06:43.372719 | orchestrator | 15:06:43.372 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=94cfa3e7-e847-4dc6-a1e6-c47130a707d1] 2025-03-26 15:06:43.373527 | orchestrator | 15:06:43.373 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 10s [id=c488efbc-4881-416e-a91a-23567b3ed08a] 2025-03-26 15:06:43.376751 | orchestrator | 15:06:43.375 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=47c04996-6b70-4edc-bb1e-4eda8ee8f7b1] 2025-03-26 15:06:43.382261 | orchestrator | 15:06:43.382 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-03-26 15:06:43.389707 | orchestrator | 15:06:43.389 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-03-26 15:06:43.390698 | orchestrator | 15:06:43.390 STDOUT terraform: local_file.id_rsa_pub: Creating... 
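The apply log above brings up 18 data volumes (node_volume[0-17]), 6 per-node base volumes (node_base_volume[0-5]) and one manager base volume. A count-based sketch of that layout; sizes and names are placeholders, since neither is printed in the log, and the image reference on the base volumes is an assumption:

  resource "openstack_blockstorage_volume_v3" "node_volume" {
    count = 18                               # indices 0-17 appear in the apply output
    name  = "testbed-volume-${count.index}"  # illustrative; real names are not shown
    size  = var.node_volume_size             # placeholder: sizes are not visible in the log
  }

  resource "openstack_blockstorage_volume_v3" "node_base_volume" {
    count    = 6                                              # one per node server
    name     = "testbed-node-base-${count.index}"             # illustrative
    size     = var.node_base_volume_size                      # placeholder
    image_id = data.openstack_images_image_v2.image_node.id   # assumption: base volumes built from the Ubuntu 24.04 image
  }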
2025-03-26 15:06:43.394461 | orchestrator | 15:06:43.394 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=58134e6f4d27d49a1e8538719fd64123af2d55dd] 2025-03-26 15:06:43.395548 | orchestrator | 15:06:43.395 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=173128c2-f7b8-43d8-ad86-e66bd69e13dc] 2025-03-26 15:06:43.402322 | orchestrator | 15:06:43.402 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=c344305293742c79440880d88f0a871f97960dce] 2025-03-26 15:06:43.638505 | orchestrator | 15:06:43.402 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-03-26 15:06:43.638635 | orchestrator | 15:06:43.638 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=dc18a773-928a-46ba-9620-ba8a35eb7f3e] 2025-03-26 15:06:48.490954 | orchestrator | 15:06:48.490 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-03-26 15:06:48.773854 | orchestrator | 15:06:48.773 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=e0b02273-8f20-4ff7-9dbe-e5279fc05177] 2025-03-26 15:06:49.149087 | orchestrator | 15:06:49.148 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=a8f187e6-4b1b-4920-967e-e14d590fed18] 2025-03-26 15:06:49.156757 | orchestrator | 15:06:49.156 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-03-26 15:06:53.266264 | orchestrator | 15:06:53.265 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-03-26 15:06:53.284386 | orchestrator | 15:06:53.284 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-03-26 15:06:53.330908 | orchestrator | 15:06:53.330 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-03-26 15:06:53.355209 | orchestrator | 15:06:53.354 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-03-26 15:06:53.382376 | orchestrator | 15:06:53.382 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... 
[10s elapsed] 2025-03-26 15:06:53.589892 | orchestrator | 15:06:53.589 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=16996a57-a85a-47ec-96d8-6a9835d4cef6] 2025-03-26 15:06:53.647170 | orchestrator | 15:06:53.646 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=88258515-cd07-41f6-b16e-86d7afde45e8] 2025-03-26 15:06:53.669253 | orchestrator | 15:06:53.668 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=e8a4f1c8-c63b-414f-8506-2e323c8cffff] 2025-03-26 15:06:53.704678 | orchestrator | 15:06:53.704 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=778c827f-9114-48f3-8218-520802856b8b] 2025-03-26 15:06:53.723209 | orchestrator | 15:06:53.722 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=d0ad0993-078e-4224-b475-2eb370a9eef9] 2025-03-26 15:06:55.930373 | orchestrator | 15:06:55.929 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=12c4fb29-98ea-4365-b774-6d7286b0dfc7] 2025-03-26 15:06:55.937366 | orchestrator | 15:06:55.937 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-03-26 15:06:55.938258 | orchestrator | 15:06:55.938 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-03-26 15:06:55.940846 | orchestrator | 15:06:55.940 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-03-26 15:06:56.060751 | orchestrator | 15:06:56.060 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=27aaba3f-c221-4fd6-8323-5cdd5fddacce] 2025-03-26 15:06:56.073580 | orchestrator | 15:06:56.073 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-03-26 15:06:56.073666 | orchestrator | 15:06:56.073 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-03-26 15:06:56.075109 | orchestrator | 15:06:56.074 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-03-26 15:06:56.077538 | orchestrator | 15:06:56.077 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-03-26 15:06:56.078120 | orchestrator | 15:06:56.077 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-03-26 15:06:56.088419 | orchestrator | 15:06:56.088 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-03-26 15:06:56.127734 | orchestrator | 15:06:56.127 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=e11f605e-d982-455b-ad97-13cdd166f014] 2025-03-26 15:06:56.134934 | orchestrator | 15:06:56.134 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-03-26 15:06:56.135182 | orchestrator | 15:06:56.134 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-03-26 15:06:56.136969 | orchestrator | 15:06:56.136 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
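The subnet created above and the router it attaches to carry exactly the values shown in the plan: a /20 management CIDR with DHCP, public DNS resolvers, an allocation pool at the top of the range, and a router named testbed uplinked to the external network e6be7364-... . A sketch assembled from those planned values; only the cross-resource references are assumed:

  resource "openstack_networking_subnet_v2" "subnet_management" {
    name            = "subnet-testbed-management"
    network_id      = openstack_networking_network_v2.net_management.id
    cidr            = "192.168.16.0/20"
    ip_version      = 4
    enable_dhcp     = true
    dns_nameservers = ["8.8.8.8", "9.9.9.9"]

    allocation_pool {
      start = "192.168.31.200"
      end   = "192.168.31.250"
    }
  }

  resource "openstack_networking_router_v2" "router" {
    name                    = "testbed"
    external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
    availability_zone_hints = ["nova"]
  }

  resource "openstack_networking_router_interface_v2" "router_interface" {
    router_id = openstack_networking_router_v2.router.id
    subnet_id = openstack_networking_subnet_v2.subnet_management.id
  }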
2025-03-26 15:06:56.219455 | orchestrator | 15:06:56.219 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=349bea88-94bc-4ed8-ac6e-47d903ceadef] 2025-03-26 15:06:56.231188 | orchestrator | 15:06:56.230 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-03-26 15:06:56.320818 | orchestrator | 15:06:56.320 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=251d35dd-7321-4608-bf6c-6169809895d6] 2025-03-26 15:06:56.339936 | orchestrator | 15:06:56.339 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-03-26 15:06:56.415773 | orchestrator | 15:06:56.415 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=1f239454-69b2-465c-b596-a7396e54c521] 2025-03-26 15:06:56.430559 | orchestrator | 15:06:56.430 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-03-26 15:06:56.516861 | orchestrator | 15:06:56.516 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=3cfb1f3a-1508-4cc8-8ea3-0e33f81c2463] 2025-03-26 15:06:56.530739 | orchestrator | 15:06:56.530 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-03-26 15:06:56.579562 | orchestrator | 15:06:56.579 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=6cb4d0ae-b0af-405b-857e-ccc89202da71] 2025-03-26 15:06:56.593772 | orchestrator | 15:06:56.593 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-03-26 15:06:56.658080 | orchestrator | 15:06:56.657 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=2b0d997f-a570-4ed8-8813-a3d6dc27e696] 2025-03-26 15:06:56.670432 | orchestrator | 15:06:56.670 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-03-26 15:06:56.683927 | orchestrator | 15:06:56.683 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=c20f5344-e092-45c7-896f-21c495a19ff2] 2025-03-26 15:06:56.687951 | orchestrator | 15:06:56.687 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 
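The rules being created here mirror the plan blocks further up: on the management group, SSH (tcp/22), WireGuard (udp/51820) and ICMP from anywhere plus unrestricted tcp/udp only from 192.168.16.0/20; on the node group, tcp, udp and icmp from anywhere; and a VRRP rule expressed as raw IP protocol 112. Two representative rules as a sketch, with values taken from the plan; which group the VRRP rule attaches to is not visible in the log and is assumed here:

  resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
    description       = "ssh"
    direction         = "ingress"
    ethertype         = "IPv4"
    protocol          = "tcp"
    port_range_min    = 22
    port_range_max    = 22
    remote_ip_prefix  = "0.0.0.0/0"
    security_group_id = openstack_networking_secgroup_v2.security_group_management.id
  }

  resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
    description       = "vrrp"
    direction         = "ingress"
    ethertype         = "IPv4"
    protocol          = "112"   # VRRP, addressed by IP protocol number
    remote_ip_prefix  = "0.0.0.0/0"
    security_group_id = openstack_networking_secgroup_v2.security_group_node.id   # assumption: target group not printed
  }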
2025-03-26 15:06:56.860967 | orchestrator | 15:06:56.860 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=76300e15-1ead-4360-a7be-1aafdc8398c3] 2025-03-26 15:06:56.973189 | orchestrator | 15:06:56.972 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=67524f1d-db7b-4f61-9d38-b0423b540317] 2025-03-26 15:07:01.957084 | orchestrator | 15:07:01.956 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=41860a70-7319-4999-9c9c-a14784171338] 2025-03-26 15:07:02.272728 | orchestrator | 15:07:02.272 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=fe0f5131-f297-40ca-a2d7-3f0b5c189118] 2025-03-26 15:07:02.442832 | orchestrator | 15:07:02.442 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=115ee3f0-130a-4610-8154-8fb97ce4bd18] 2025-03-26 15:07:02.504876 | orchestrator | 15:07:02.504 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 7s [id=0a2bd4b0-bd54-4e5b-bb6d-72aaffe5e48a] 2025-03-26 15:07:02.572084 | orchestrator | 15:07:02.571 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=6a742607-dcf7-40c2-9ce8-a71e442caaa3] 2025-03-26 15:07:02.669216 | orchestrator | 15:07:02.668 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=68457f67-916d-465e-94fe-5372d4d0bc01] 2025-03-26 15:07:02.906082 | orchestrator | 15:07:02.905 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=1af67a24-e0c1-4efc-9130-f663cd21cb6c] 2025-03-26 15:07:02.913241 | orchestrator | 15:07:02.912 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-03-26 15:07:03.018965 | orchestrator | 15:07:03.018 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=6e8e3e15-3f36-48e2-bdad-170766322ba3] 2025-03-26 15:07:03.055737 | orchestrator | 15:07:03.055 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-03-26 15:07:03.056356 | orchestrator | 15:07:03.056 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-03-26 15:07:03.063140 | orchestrator | 15:07:03.062 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-03-26 15:07:03.064926 | orchestrator | 15:07:03.064 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-03-26 15:07:03.065546 | orchestrator | 15:07:03.065 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-03-26 15:07:03.066667 | orchestrator | 15:07:03.066 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-03-26 15:07:09.207562 | orchestrator | 15:07:09.207 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=5d1a1488-3b61-4c2a-96cd-8a0af06df607] 2025-03-26 15:07:09.465438 | orchestrator | 15:07:09.465 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-03-26 15:07:09.468198 | orchestrator | 15:07:09.468 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-03-26 15:07:09.469693 | orchestrator | 15:07:09.469 STDOUT terraform: local_file.inventory: Creating... 
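The manager floating IP allocated above is bound to the manager's management port, and the address is persisted through local_file resources (MANAGER_ADDRESS, inventory) for the stages that follow. A sketch of that association; the pool name and the file path are assumptions, since neither appears in the log:

  resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
    pool = "external"   # assumption: the pool name is not visible in the log
  }

  resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
    floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
    port_id     = openstack_networking_port_v2.manager_port_management.id
  }

  resource "local_file" "MANAGER_ADDRESS" {
    content  = openstack_networking_floatingip_v2.manager_floating_ip.address
    filename = "${path.module}/MANAGER_ADDRESS"   # illustrative path
  }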
2025-03-26 15:07:09.474962 | orchestrator | 15:07:09.474 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=9be88029fd4579e97e9987238c13fd0c584bda12] 2025-03-26 15:07:09.475791 | orchestrator | 15:07:09.475 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=f68582ed12d0f16adcd8b5095ae3bdc4fd46a66a] 2025-03-26 15:07:09.931029 | orchestrator | 15:07:09.930 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=5d1a1488-3b61-4c2a-96cd-8a0af06df607] 2025-03-26 15:07:13.057162 | orchestrator | 15:07:13.056 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-03-26 15:07:13.060090 | orchestrator | 15:07:13.059 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-03-26 15:07:13.066540 | orchestrator | 15:07:13.066 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-03-26 15:07:13.068598 | orchestrator | 15:07:13.068 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-03-26 15:07:13.068816 | orchestrator | 15:07:13.068 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-03-26 15:07:13.069744 | orchestrator | 15:07:13.069 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-03-26 15:07:23.057520 | orchestrator | 15:07:23.057 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-03-26 15:07:23.060856 | orchestrator | 15:07:23.060 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-03-26 15:07:23.066734 | orchestrator | 15:07:23.066 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-03-26 15:07:23.070145 | orchestrator | 15:07:23.069 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-03-26 15:07:23.070275 | orchestrator | 15:07:23.069 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-03-26 15:07:23.479214 | orchestrator | 15:07:23.070 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-03-26 15:07:23.479332 | orchestrator | 15:07:23.478 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=c7b602e8-7e98-4d22-a45c-d8f4db4653a8] 2025-03-26 15:07:23.647264 | orchestrator | 15:07:23.646 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=77a6fc0c-0307-4017-92d1-008536cd1b21] 2025-03-26 15:07:23.704518 | orchestrator | 15:07:23.704 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=0a8acce4-44dc-41c6-a9d8-3edf6451607f] 2025-03-26 15:07:33.061954 | orchestrator | 15:07:33.061 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-03-26 15:07:33.067081 | orchestrator | 15:07:33.066 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-03-26 15:07:33.070401 | orchestrator | 15:07:33.070 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-03-26 15:07:33.658277 | orchestrator | 15:07:33.657 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=8ebb556a-31a4-40d1-85fc-6d6444e18620] 2025-03-26 15:07:33.671180 | orchestrator | 15:07:33.670 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=51a6d173-e05a-4848-a478-61d4489fba23] 2025-03-26 15:07:33.681602 | orchestrator | 15:07:33.681 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=cb6ff4a9-0379-4b1c-9750-d5157c2e9cfa] 2025-03-26 15:07:33.706414 | orchestrator | 15:07:33.706 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-03-26 15:07:33.709947 | orchestrator | 15:07:33.709 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1107995949006733374] 2025-03-26 15:07:33.719622 | orchestrator | 15:07:33.719 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-03-26 15:07:33.719803 | orchestrator | 15:07:33.719 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-03-26 15:07:33.724429 | orchestrator | 15:07:33.724 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-03-26 15:07:33.725106 | orchestrator | 15:07:33.725 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-03-26 15:07:33.725705 | orchestrator | 15:07:33.725 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-03-26 15:07:33.727149 | orchestrator | 15:07:33.727 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-03-26 15:07:33.733487 | orchestrator | 15:07:33.733 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-03-26 15:07:33.737300 | orchestrator | 15:07:33.737 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 2025-03-26 15:07:33.737593 | orchestrator | 15:07:33.737 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 2025-03-26 15:07:33.740804 | orchestrator | 15:07:33.740 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-03-26 15:07:39.099395 | orchestrator | 15:07:39.098 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 5s [id=8ebb556a-31a4-40d1-85fc-6d6444e18620/958efc11-9688-4d0a-823e-66b7f0ad5251] 2025-03-26 15:07:39.111801 | orchestrator | 15:07:39.111 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-03-26 15:07:39.114935 | orchestrator | 15:07:39.114 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=c7b602e8-7e98-4d22-a45c-d8f4db4653a8/a37eb2c8-f05e-43da-8c12-0830eb7c9d71] 2025-03-26 15:07:39.128500 | orchestrator | 15:07:39.128 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 5s [id=0a8acce4-44dc-41c6-a9d8-3edf6451607f/3084e68a-ebad-4437-bd94-82607113cb35] 2025-03-26 15:07:39.130530 | orchestrator | 15:07:39.130 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 2025-03-26 15:07:39.138362 | orchestrator | 15:07:39.138 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 
2025-03-26 15:07:39.142404 | orchestrator | 15:07:39.142 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=51a6d173-e05a-4848-a478-61d4489fba23/d3112d8a-8c3b-4cce-9d8e-e42180f2448c] 2025-03-26 15:07:39.142954 | orchestrator | 15:07:39.142 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 5s [id=77a6fc0c-0307-4017-92d1-008536cd1b21/173128c2-f7b8-43d8-ad86-e66bd69e13dc] 2025-03-26 15:07:39.156685 | orchestrator | 15:07:39.156 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=8ebb556a-31a4-40d1-85fc-6d6444e18620/07965c29-3afe-4431-b6c9-77a12f491bee] 2025-03-26 15:07:39.157456 | orchestrator | 15:07:39.156 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=cb6ff4a9-0379-4b1c-9750-d5157c2e9cfa/3983f5b4-5dfc-4eda-b37e-7ecf02ef52dc] 2025-03-26 15:07:39.158424 | orchestrator | 15:07:39.158 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=0a8acce4-44dc-41c6-a9d8-3edf6451607f/1f0ffd5a-0515-4e82-9e1a-64c4889ae37d] 2025-03-26 15:07:39.161258 | orchestrator | 15:07:39.161 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-03-26 15:07:39.162703 | orchestrator | 15:07:39.162 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-03-26 15:07:39.170830 | orchestrator | 15:07:39.170 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=77a6fc0c-0307-4017-92d1-008536cd1b21/018eeca1-6df5-40cc-92d0-1bd4b2f4c48a] 2025-03-26 15:07:39.172163 | orchestrator | 15:07:39.171 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 2025-03-26 15:07:39.177190 | orchestrator | 15:07:39.177 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-03-26 15:07:39.179716 | orchestrator | 15:07:39.179 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-03-26 15:07:39.188175 | orchestrator | 15:07:39.188 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
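Each openstack_compute_volume_attach_v2 id reported above is the server UUID and the volume UUID joined by a slash, so every attachment can be mapped back to one of the node_server instances created earlier. A minimal shell sketch, using one id taken from this log (node_volume_attachment[12], which attaches a volume to node_server[0]):

    attach_id="8ebb556a-31a4-40d1-85fc-6d6444e18620/958efc11-9688-4d0a-823e-66b7f0ad5251"
    server_id="${attach_id%%/*}"   # node_server[0]
    volume_id="${attach_id##*/}"   # UUID of the attached volume
    echo "server=${server_id} volume=${volume_id}"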
2025-03-26 15:07:39.260220 | orchestrator | 15:07:39.259 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=cb6ff4a9-0379-4b1c-9750-d5157c2e9cfa/e160b5d2-bb2c-4d46-9a19-0f90a2a200dd] 2025-03-26 15:07:44.422093 | orchestrator | 15:07:44.421 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=c7b602e8-7e98-4d22-a45c-d8f4db4653a8/111b4eab-b6cc-4138-b8ce-a1d0ca0ede1d] 2025-03-26 15:07:44.429511 | orchestrator | 15:07:44.429 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=51a6d173-e05a-4848-a478-61d4489fba23/fe7240ff-f38b-479a-b34e-426facce92bd] 2025-03-26 15:07:44.514250 | orchestrator | 15:07:44.513 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=8ebb556a-31a4-40d1-85fc-6d6444e18620/94cfa3e7-e847-4dc6-a1e6-c47130a707d1] 2025-03-26 15:07:44.518918 | orchestrator | 15:07:44.518 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 6s [id=c7b602e8-7e98-4d22-a45c-d8f4db4653a8/538d5b73-1cd2-4d51-a71b-5a4f6cbc6cf8] 2025-03-26 15:07:44.535914 | orchestrator | 15:07:44.535 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=51a6d173-e05a-4848-a478-61d4489fba23/288b9d42-3702-4f33-88cc-3a76a9d8c61f] 2025-03-26 15:07:44.544586 | orchestrator | 15:07:44.544 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 6s [id=0a8acce4-44dc-41c6-a9d8-3edf6451607f/461b6804-a72b-4d73-873c-350a148be214] 2025-03-26 15:07:44.547365 | orchestrator | 15:07:44.547 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 6s [id=77a6fc0c-0307-4017-92d1-008536cd1b21/47c04996-6b70-4edc-bb1e-4eda8ee8f7b1] 2025-03-26 15:07:44.553457 | orchestrator | 15:07:44.553 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 6s [id=cb6ff4a9-0379-4b1c-9750-d5157c2e9cfa/c488efbc-4881-416e-a91a-23567b3ed08a] 2025-03-26 15:07:49.189912 | orchestrator | 15:07:49.189 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-03-26 15:07:59.193418 | orchestrator | 15:07:59.193 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-03-26 15:07:59.797238 | orchestrator | 15:07:59.796 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=b61d0add-9193-4730-969c-13215f7ba531] 2025-03-26 15:07:59.863274 | orchestrator | 15:07:59.862 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
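With the apply finished (82 resources added, none changed or destroyed), the plan's two outputs follow below and are then consumed by the job's next tasks; their values are not echoed in the log. If the same state were inspected by hand from the Terraform working directory, a sketch would be:

    terraform state list | wc -l            # should report 82 resources after this apply
    terraform output -raw manager_address   # floating IP the following tasks connect to
    terraform output -raw private_key       # SSH key material, deliberately kept out of the log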
2025-03-26 15:07:59.863367 | orchestrator | 15:07:59.863 STDOUT terraform: Outputs: 2025-03-26 15:07:59.863390 | orchestrator | 15:07:59.863 STDOUT terraform: manager_address = 2025-03-26 15:07:59.872979 | orchestrator | 15:07:59.863 STDOUT terraform: private_key = 2025-03-26 15:08:10.304961 | orchestrator | changed 2025-03-26 15:08:10.344527 | 2025-03-26 15:08:10.344670 | TASK [Fetch manager address] 2025-03-26 15:08:10.751162 | orchestrator | ok 2025-03-26 15:08:10.768768 | 2025-03-26 15:08:10.769004 | TASK [Set manager_host address] 2025-03-26 15:08:10.907249 | orchestrator | ok 2025-03-26 15:08:10.918183 | 2025-03-26 15:08:10.918298 | LOOP [Update ansible collections] 2025-03-26 15:08:11.637401 | orchestrator | changed 2025-03-26 15:08:12.376319 | orchestrator | changed 2025-03-26 15:08:12.391759 | 2025-03-26 15:08:12.391942 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-03-26 15:08:22.909305 | orchestrator | ok 2025-03-26 15:08:22.919296 | 2025-03-26 15:08:22.919407 | TASK [Wait a little longer for the manager so that everything is ready] 2025-03-26 15:09:22.957077 | orchestrator | ok 2025-03-26 15:09:22.966805 | 2025-03-26 15:09:22.967019 | TASK [Fetch manager ssh hostkey] 2025-03-26 15:09:24.010972 | orchestrator | Output suppressed because no_log was given 2025-03-26 15:09:24.029834 | 2025-03-26 15:09:24.029992 | TASK [Get ssh keypair from terraform environment] 2025-03-26 15:09:24.613918 | orchestrator | changed 2025-03-26 15:09:24.631333 | 2025-03-26 15:09:24.631471 | TASK [Point out that the following task takes some time and does not give any output] 2025-03-26 15:09:24.684334 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
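The wait task above polls the manager until port 22 answers with an OpenSSH banner, presumably Ansible's wait_for module with a 300 second timeout and a search pattern of "OpenSSH". A rough shell equivalent, with MANAGER_ADDRESS standing in for the address fetched earlier:

    # Poll for up to roughly 300 seconds until the SSH banner is available.
    for _ in $(seq 1 60); do
        nc -w 2 "$MANAGER_ADDRESS" 22 </dev/null | grep -q OpenSSH && break
        sleep 5
    done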
2025-03-26 15:09:24.696217 | 2025-03-26 15:09:24.696341 | TASK [Run manager part 0] 2025-03-26 15:09:25.556322 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-03-26 15:09:25.598431 | orchestrator | 2025-03-26 15:09:27.593514 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-03-26 15:09:27.593582 | orchestrator | 2025-03-26 15:09:27.593607 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-03-26 15:09:27.593626 | orchestrator | ok: [testbed-manager] 2025-03-26 15:09:29.608578 | orchestrator | 2025-03-26 15:09:29.608806 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-03-26 15:09:29.608840 | orchestrator | 2025-03-26 15:09:29.608857 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-26 15:09:29.608889 | orchestrator | ok: [testbed-manager] 2025-03-26 15:09:30.358564 | orchestrator | 2025-03-26 15:09:30.358659 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-03-26 15:09:30.358682 | orchestrator | ok: [testbed-manager] 2025-03-26 15:09:30.408104 | orchestrator | 2025-03-26 15:09:30.408177 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-03-26 15:09:30.408197 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:09:30.435772 | orchestrator | 2025-03-26 15:09:30.435818 | orchestrator | TASK [Update package cache] **************************************************** 2025-03-26 15:09:30.435832 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:09:30.467510 | orchestrator | 2025-03-26 15:09:30.467547 | orchestrator | TASK [Install required packages] *********************************************** 2025-03-26 15:09:30.467559 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:09:30.492994 | orchestrator | 2025-03-26 15:09:30.493032 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-03-26 15:09:30.493044 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:09:30.519470 | orchestrator | 2025-03-26 15:09:30.519506 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-03-26 15:09:30.519518 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:09:30.545329 | orchestrator | 2025-03-26 15:09:30.545366 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-03-26 15:09:30.545378 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:09:30.570463 | orchestrator | 2025-03-26 15:09:30.570497 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-03-26 15:09:30.570509 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:09:31.428694 | orchestrator | 2025-03-26 15:09:31.428803 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-03-26 15:09:31.428841 | orchestrator | changed: [testbed-manager] 2025-03-26 15:12:29.000808 | orchestrator | 2025-03-26 15:12:29.000889 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-03-26 15:12:29.000925 | orchestrator | changed: [testbed-manager] 2025-03-26 15:13:48.518912 | orchestrator | 2025-03-26 15:13:48.519009 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-03-26 15:13:48.519035 | orchestrator | changed: [testbed-manager] 2025-03-26 15:14:13.529138 | orchestrator | 2025-03-26 15:14:13.529253 | orchestrator | TASK [Install required packages] *********************************************** 2025-03-26 15:14:13.529286 | orchestrator | changed: [testbed-manager] 2025-03-26 15:14:23.752467 | orchestrator | 2025-03-26 15:14:23.752575 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-03-26 15:14:23.752608 | orchestrator | changed: [testbed-manager] 2025-03-26 15:14:23.799098 | orchestrator | 2025-03-26 15:14:23.799162 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-03-26 15:14:23.799198 | orchestrator | ok: [testbed-manager] 2025-03-26 15:14:24.616603 | orchestrator | 2025-03-26 15:14:24.616712 | orchestrator | TASK [Get current user] ******************************************************** 2025-03-26 15:14:24.616747 | orchestrator | ok: [testbed-manager] 2025-03-26 15:14:25.396328 | orchestrator | 2025-03-26 15:14:25.396410 | orchestrator | TASK [Create venv directory] *************************************************** 2025-03-26 15:14:25.396439 | orchestrator | changed: [testbed-manager] 2025-03-26 15:14:32.609435 | orchestrator | 2025-03-26 15:14:32.609555 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-03-26 15:14:32.609597 | orchestrator | changed: [testbed-manager] 2025-03-26 15:14:39.217206 | orchestrator | 2025-03-26 15:14:39.217273 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-03-26 15:14:39.217303 | orchestrator | changed: [testbed-manager] 2025-03-26 15:14:42.039667 | orchestrator | 2025-03-26 15:14:42.039803 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-03-26 15:14:42.039841 | orchestrator | changed: [testbed-manager] 2025-03-26 15:14:43.962457 | orchestrator | 2025-03-26 15:14:43.962567 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-03-26 15:14:43.962603 | orchestrator | changed: [testbed-manager] 2025-03-26 15:14:45.117154 | orchestrator | 2025-03-26 15:14:45.117227 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-03-26 15:14:45.117255 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-03-26 15:14:45.159292 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-03-26 15:14:45.159369 | orchestrator | 2025-03-26 15:14:45.159389 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-03-26 15:14:45.159414 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-03-26 15:14:48.462309 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-03-26 15:14:48.462410 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-03-26 15:14:48.462430 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-03-26 15:14:48.462460 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-03-26 15:14:49.051675 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-03-26 15:14:49.051797 | orchestrator | 2025-03-26 15:14:49.051818 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-03-26 15:14:49.051847 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:13.494784 | orchestrator | 2025-03-26 15:15:13.494889 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-03-26 15:15:13.494925 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-03-26 15:15:16.337692 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-03-26 15:15:16.337784 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-03-26 15:15:16.337801 | orchestrator | 2025-03-26 15:15:16.337816 | orchestrator | TASK [Install local collections] *********************************************** 2025-03-26 15:15:16.337843 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-03-26 15:15:17.919914 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-03-26 15:15:17.920017 | orchestrator | 2025-03-26 15:15:17.920040 | orchestrator | PLAY [Create operator user] **************************************************** 2025-03-26 15:15:17.920056 | orchestrator | 2025-03-26 15:15:17.920071 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-26 15:15:17.920100 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:17.966902 | orchestrator | 2025-03-26 15:15:17.966971 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-03-26 15:15:17.967001 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:18.028792 | orchestrator | 2025-03-26 15:15:18.028852 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-03-26 15:15:18.028879 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:18.828537 | orchestrator | 2025-03-26 15:15:18.828636 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-03-26 15:15:18.828673 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:19.599776 | orchestrator | 2025-03-26 15:15:19.599874 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-03-26 15:15:19.599906 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:21.060578 | orchestrator | 2025-03-26 15:15:21.060683 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-03-26 15:15:21.060719 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-03-26 15:15:22.511963 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-03-26 15:15:22.512015 | orchestrator | 2025-03-26 15:15:22.512027 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-03-26 15:15:22.512045 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:24.478697 | orchestrator | 2025-03-26 15:15:24.478804 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-03-26 15:15:24.478853 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-03-26 
15:15:25.094636 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-03-26 15:15:25.094741 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-03-26 15:15:25.094761 | orchestrator | 2025-03-26 15:15:25.094777 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-03-26 15:15:25.094809 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:25.166493 | orchestrator | 2025-03-26 15:15:25.166565 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-03-26 15:15:25.166597 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:15:26.093681 | orchestrator | 2025-03-26 15:15:26.093806 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-03-26 15:15:26.093843 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-26 15:15:26.132038 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:26.132106 | orchestrator | 2025-03-26 15:15:26.132121 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-03-26 15:15:26.132147 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:15:26.169594 | orchestrator | 2025-03-26 15:15:26.169653 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-03-26 15:15:26.169679 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:15:26.208316 | orchestrator | 2025-03-26 15:15:26.208389 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-03-26 15:15:26.208424 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:15:26.259129 | orchestrator | 2025-03-26 15:15:26.259191 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-03-26 15:15:26.259218 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:15:26.991160 | orchestrator | 2025-03-26 15:15:26.991206 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-03-26 15:15:26.991222 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:28.552380 | orchestrator | 2025-03-26 15:15:28.552425 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-03-26 15:15:28.552482 | orchestrator | 2025-03-26 15:15:28.552489 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-26 15:15:28.552501 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:29.587646 | orchestrator | 2025-03-26 15:15:29.588376 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-03-26 15:15:29.588399 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:29.690717 | orchestrator | 2025-03-26 15:15:29.690789 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:15:29.690797 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-03-26 15:15:29.690803 | orchestrator | 2025-03-26 15:15:29.945280 | orchestrator | changed 2025-03-26 15:15:29.970073 | 2025-03-26 15:15:29.970217 | TASK [Point out that the log in on the manager is now possible] 2025-03-26 15:15:30.023136 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
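'make login' is a convenience target of the testbed repository; it presumably amounts to an SSH session to the manager as the operator user created above (dragon, as seen later in this log). The sketch below uses the floating IP from /opt/manager-vars.sh shown further down; the key file name is only a placeholder, since the real target resolves it from the Terraform working directory:

    MANAGER_IP=81.163.192.171                     # MANAGER_PUBLIC_IP_ADDRESS from manager-vars.sh
    ssh -i id_rsa.testbed dragon@"$MANAGER_IP"    # key path is a placeholder, not the real file name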
2025-03-26 15:15:30.036031 | 2025-03-26 15:15:30.036156 | TASK [Point out that the following task takes some time and does not give any output] 2025-03-26 15:15:30.084691 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 2025-03-26 15:15:30.095810 | 2025-03-26 15:15:30.095919 | TASK [Run manager part 1 + 2] 2025-03-26 15:15:30.940164 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-03-26 15:15:30.999365 | orchestrator | 2025-03-26 15:15:33.583719 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-03-26 15:15:33.583790 | orchestrator | 2025-03-26 15:15:33.583817 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-26 15:15:33.583836 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:33.617865 | orchestrator | 2025-03-26 15:15:33.617919 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-03-26 15:15:33.617941 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:15:33.654610 | orchestrator | 2025-03-26 15:15:33.654650 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-03-26 15:15:33.654668 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:33.694773 | orchestrator | 2025-03-26 15:15:33.694815 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-03-26 15:15:33.694833 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:33.759281 | orchestrator | 2025-03-26 15:15:33.759327 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-03-26 15:15:33.759347 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:33.819985 | orchestrator | 2025-03-26 15:15:33.820028 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-03-26 15:15:33.820046 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:33.868382 | orchestrator | 2025-03-26 15:15:33.868432 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-03-26 15:15:33.868445 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-03-26 15:15:34.602093 | orchestrator | 2025-03-26 15:15:34.602160 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-03-26 15:15:34.602181 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:34.649084 | orchestrator | 2025-03-26 15:15:34.649126 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-03-26 15:15:34.649143 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:15:36.052315 | orchestrator | 2025-03-26 15:15:36.052377 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-03-26 15:15:36.052417 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:36.661359 | orchestrator | 2025-03-26 15:15:36.661434 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-03-26 15:15:36.661455 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:37.880361 | orchestrator | 2025-03-26 15:15:37.880461 | orchestrator | TASK 
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-03-26 15:15:37.880482 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:51.555225 | orchestrator | 2025-03-26 15:15:51.555314 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-03-26 15:15:51.555339 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:52.245937 | orchestrator | 2025-03-26 15:15:52.246133 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-03-26 15:15:52.246180 | orchestrator | ok: [testbed-manager] 2025-03-26 15:15:52.291820 | orchestrator | 2025-03-26 15:15:52.291914 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-03-26 15:15:52.291950 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:15:53.366127 | orchestrator | 2025-03-26 15:15:53.366267 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-03-26 15:15:53.366328 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:54.367009 | orchestrator | 2025-03-26 15:15:54.367119 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-03-26 15:15:54.367153 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:54.959442 | orchestrator | 2025-03-26 15:15:54.959968 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-03-26 15:15:54.959992 | orchestrator | changed: [testbed-manager] 2025-03-26 15:15:55.001288 | orchestrator | 2025-03-26 15:15:55.001366 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-03-26 15:15:55.001398 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-03-26 15:15:57.388639 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-03-26 15:15:57.388730 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-03-26 15:15:57.388748 | orchestrator | deprecation_warnings=False in ansible.cfg. 
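The repository role above leaves the legacy /etc/apt/sources.list removed and manages the deb822-style ubuntu.sources file instead, which is the default APT source format on Ubuntu 24.04; the "Update package cache" run that follows picks up the new configuration. The resulting file can be checked on the manager with:

    # deb822 APT sources written by osism.commons.repository (path taken from the log)
    cat /etc/apt/sources.list.d/ubuntu.sources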
2025-03-26 15:15:57.388776 | orchestrator | changed: [testbed-manager] 2025-03-26 15:16:07.887585 | orchestrator | 2025-03-26 15:16:07.887658 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-03-26 15:16:07.887681 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-03-26 15:16:08.992681 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-03-26 15:16:08.992785 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-03-26 15:16:08.992804 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-03-26 15:16:08.992821 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-03-26 15:16:08.992835 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-03-26 15:16:08.992850 | orchestrator | 2025-03-26 15:16:08.992865 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-03-26 15:16:08.992915 | orchestrator | changed: [testbed-manager] 2025-03-26 15:16:09.025874 | orchestrator | 2025-03-26 15:16:09.025927 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-03-26 15:16:09.025954 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:16:12.399281 | orchestrator | 2025-03-26 15:16:12.399404 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-03-26 15:16:12.399444 | orchestrator | changed: [testbed-manager] 2025-03-26 15:16:12.442135 | orchestrator | 2025-03-26 15:16:12.442232 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-03-26 15:16:12.442262 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:17:59.709825 | orchestrator | 2025-03-26 15:17:59.709935 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-03-26 15:17:59.709973 | orchestrator | changed: [testbed-manager] 2025-03-26 15:18:00.984555 | orchestrator | 2025-03-26 15:18:00.984630 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-03-26 15:18:00.984648 | orchestrator | ok: [testbed-manager] 2025-03-26 15:18:01.087171 | orchestrator | 2025-03-26 15:18:01.087242 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:18:01.087252 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-03-26 15:18:01.087257 | orchestrator | 2025-03-26 15:18:01.228532 | orchestrator | changed 2025-03-26 15:18:01.254862 | 2025-03-26 15:18:01.254987 | TASK [Reboot manager] 2025-03-26 15:18:02.831876 | orchestrator | changed 2025-03-26 15:18:02.851036 | 2025-03-26 15:18:02.851179 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-03-26 15:18:19.276872 | orchestrator | ok 2025-03-26 15:18:19.286137 | 2025-03-26 15:18:19.286259 | TASK [Wait a little longer for the manager so that everything is ready] 2025-03-26 15:19:19.333876 | orchestrator | ok 2025-03-26 15:19:19.348710 | 2025-03-26 15:19:19.349300 | TASK [Deploy manager + bootstrap nodes] 2025-03-26 15:19:22.115079 | orchestrator | 2025-03-26 15:19:22.120073 | orchestrator | # DEPLOY MANAGER 2025-03-26 15:19:22.120115 | orchestrator | 2025-03-26 15:19:22.120133 | orchestrator | + set -e 2025-03-26 15:19:22.120179 | orchestrator | + echo 2025-03-26 15:19:22.120199 | orchestrator | + echo '# DEPLOY MANAGER' 2025-03-26 15:19:22.120216 | 
orchestrator | + echo 2025-03-26 15:19:22.120240 | orchestrator | + cat /opt/manager-vars.sh 2025-03-26 15:19:22.120305 | orchestrator | export NUMBER_OF_NODES=6 2025-03-26 15:19:22.120531 | orchestrator | 2025-03-26 15:19:22.120552 | orchestrator | export CEPH_VERSION=quincy 2025-03-26 15:19:22.120567 | orchestrator | export CONFIGURATION_VERSION=main 2025-03-26 15:19:22.120581 | orchestrator | export MANAGER_VERSION=8.1.0 2025-03-26 15:19:22.120595 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-03-26 15:19:22.120609 | orchestrator | 2025-03-26 15:19:22.120624 | orchestrator | export ARA=false 2025-03-26 15:19:22.120638 | orchestrator | export TEMPEST=false 2025-03-26 15:19:22.120651 | orchestrator | export IS_ZUUL=true 2025-03-26 15:19:22.120665 | orchestrator | 2025-03-26 15:19:22.120679 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.171 2025-03-26 15:19:22.120694 | orchestrator | export EXTERNAL_API=false 2025-03-26 15:19:22.120708 | orchestrator | 2025-03-26 15:19:22.120721 | orchestrator | export IMAGE_USER=ubuntu 2025-03-26 15:19:22.120735 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-03-26 15:19:22.120749 | orchestrator | 2025-03-26 15:19:22.120763 | orchestrator | export CEPH_STACK=ceph-ansible 2025-03-26 15:19:22.120782 | orchestrator | 2025-03-26 15:19:22.121856 | orchestrator | + echo 2025-03-26 15:19:22.121877 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-26 15:19:22.121896 | orchestrator | ++ export INTERACTIVE=false 2025-03-26 15:19:22.121958 | orchestrator | ++ INTERACTIVE=false 2025-03-26 15:19:22.121973 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-26 15:19:22.121995 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-26 15:19:22.122009 | orchestrator | + source /opt/manager-vars.sh 2025-03-26 15:19:22.122069 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-26 15:19:22.122084 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-26 15:19:22.122102 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-26 15:19:22.185639 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-26 15:19:22.185667 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-26 15:19:22.185682 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-26 15:19:22.185705 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-03-26 15:19:22.185719 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-03-26 15:19:22.185732 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-03-26 15:19:22.185746 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-26 15:19:22.185760 | orchestrator | ++ export ARA=false 2025-03-26 15:19:22.185774 | orchestrator | ++ ARA=false 2025-03-26 15:19:22.185788 | orchestrator | ++ export TEMPEST=false 2025-03-26 15:19:22.185802 | orchestrator | ++ TEMPEST=false 2025-03-26 15:19:22.185815 | orchestrator | ++ export IS_ZUUL=true 2025-03-26 15:19:22.185829 | orchestrator | ++ IS_ZUUL=true 2025-03-26 15:19:22.185842 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.171 2025-03-26 15:19:22.185856 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.171 2025-03-26 15:19:22.185877 | orchestrator | ++ export EXTERNAL_API=false 2025-03-26 15:19:22.185891 | orchestrator | ++ EXTERNAL_API=false 2025-03-26 15:19:22.185905 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-26 15:19:22.185918 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-26 15:19:22.185932 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-26 15:19:22.185946 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-26 15:19:22.185962 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2025-03-26 15:19:22.185976 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-26 15:19:22.185990 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-03-26 15:19:22.186058 | orchestrator | + docker version 2025-03-26 15:19:22.487829 | orchestrator | Client: Docker Engine - Community 2025-03-26 15:19:22.489809 | orchestrator | Version: 26.1.4 2025-03-26 15:19:22.489933 | orchestrator | API version: 1.45 2025-03-26 15:19:22.489952 | orchestrator | Go version: go1.21.11 2025-03-26 15:19:22.489967 | orchestrator | Git commit: 5650f9b 2025-03-26 15:19:22.489981 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-03-26 15:19:22.489996 | orchestrator | OS/Arch: linux/amd64 2025-03-26 15:19:22.490010 | orchestrator | Context: default 2025-03-26 15:19:22.490092 | orchestrator | 2025-03-26 15:19:22.490108 | orchestrator | Server: Docker Engine - Community 2025-03-26 15:19:22.490122 | orchestrator | Engine: 2025-03-26 15:19:22.490136 | orchestrator | Version: 26.1.4 2025-03-26 15:19:22.490150 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-03-26 15:19:22.490164 | orchestrator | Go version: go1.21.11 2025-03-26 15:19:22.490179 | orchestrator | Git commit: de5c9cf 2025-03-26 15:19:22.490233 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-03-26 15:19:22.490283 | orchestrator | OS/Arch: linux/amd64 2025-03-26 15:19:22.490299 | orchestrator | Experimental: false 2025-03-26 15:19:22.490313 | orchestrator | containerd: 2025-03-26 15:19:22.490327 | orchestrator | Version: 1.7.26 2025-03-26 15:19:22.490341 | orchestrator | GitCommit: 753481ec61c7c8955a23d6ff7bc8e4daed455734 2025-03-26 15:19:22.490356 | orchestrator | runc: 2025-03-26 15:19:22.490369 | orchestrator | Version: 1.2.5 2025-03-26 15:19:22.490383 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-03-26 15:19:22.490397 | orchestrator | docker-init: 2025-03-26 15:19:22.490411 | orchestrator | Version: 0.19.0 2025-03-26 15:19:22.490425 | orchestrator | GitCommit: de40ad0 2025-03-26 15:19:22.490456 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-03-26 15:19:22.498429 | orchestrator | + set -e 2025-03-26 15:19:22.498496 | orchestrator | + source /opt/manager-vars.sh 2025-03-26 15:19:22.498531 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-26 15:19:22.498546 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-26 15:19:22.498559 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-26 15:19:22.498573 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-26 15:19:22.498587 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-26 15:19:22.498602 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-26 15:19:22.498615 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-03-26 15:19:22.498629 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-03-26 15:19:22.498643 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-03-26 15:19:22.498656 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-26 15:19:22.498670 | orchestrator | ++ export ARA=false 2025-03-26 15:19:22.498684 | orchestrator | ++ ARA=false 2025-03-26 15:19:22.498697 | orchestrator | ++ export TEMPEST=false 2025-03-26 15:19:22.498711 | orchestrator | ++ TEMPEST=false 2025-03-26 15:19:22.498724 | orchestrator | ++ export IS_ZUUL=true 2025-03-26 15:19:22.498738 | orchestrator | ++ IS_ZUUL=true 2025-03-26 15:19:22.498752 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.171 2025-03-26 15:19:22.498774 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.171 
2025-03-26 15:19:22.498788 | orchestrator | ++ export EXTERNAL_API=false 2025-03-26 15:19:22.498802 | orchestrator | ++ EXTERNAL_API=false 2025-03-26 15:19:22.498827 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-26 15:19:22.505517 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-26 15:19:22.505564 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-26 15:19:22.505589 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-26 15:19:22.505608 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-03-26 15:19:22.505622 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-26 15:19:22.505636 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-26 15:19:22.505651 | orchestrator | ++ export INTERACTIVE=false 2025-03-26 15:19:22.505664 | orchestrator | ++ INTERACTIVE=false 2025-03-26 15:19:22.505678 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-26 15:19:22.505692 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-26 15:19:22.505706 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-26 15:19:22.505723 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-03-26 15:19:22.505747 | orchestrator | + set -e 2025-03-26 15:19:22.515487 | orchestrator | + VERSION=8.1.0 2025-03-26 15:19:22.515535 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-03-26 15:19:22.515576 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-26 15:19:22.521093 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-03-26 15:19:22.521130 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-03-26 15:19:22.525196 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-03-26 15:19:22.531915 | orchestrator | /opt/configuration ~ 2025-03-26 15:19:22.533995 | orchestrator | + set -e 2025-03-26 15:19:22.534066 | orchestrator | + pushd /opt/configuration 2025-03-26 15:19:22.534082 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-26 15:19:22.534102 | orchestrator | + source /opt/venv/bin/activate 2025-03-26 15:19:22.535348 | orchestrator | ++ deactivate nondestructive 2025-03-26 15:19:22.535433 | orchestrator | ++ '[' -n '' ']' 2025-03-26 15:19:22.535454 | orchestrator | ++ '[' -n '' ']' 2025-03-26 15:19:22.535469 | orchestrator | ++ hash -r 2025-03-26 15:19:22.535483 | orchestrator | ++ '[' -n '' ']' 2025-03-26 15:19:22.535496 | orchestrator | ++ unset VIRTUAL_ENV 2025-03-26 15:19:22.535511 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-03-26 15:19:22.535525 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-03-26 15:19:22.535578 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-03-26 15:19:22.535593 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-03-26 15:19:22.535606 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-03-26 15:19:22.535621 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-03-26 15:19:22.535640 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-26 15:19:23.880167 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-26 15:19:23.880334 | orchestrator | ++ export PATH 2025-03-26 15:19:23.880355 | orchestrator | ++ '[' -n '' ']' 2025-03-26 15:19:23.880370 | orchestrator | ++ '[' -z '' ']' 2025-03-26 15:19:23.880384 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-03-26 15:19:23.880399 | orchestrator | ++ PS1='(venv) ' 2025-03-26 15:19:23.880413 | orchestrator | ++ export PS1 2025-03-26 15:19:23.880427 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-03-26 15:19:23.880441 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-03-26 15:19:23.880456 | orchestrator | ++ hash -r 2025-03-26 15:19:23.880471 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-03-26 15:19:23.880507 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-03-26 15:19:23.882210 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-03-26 15:19:23.882829 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-03-26 15:19:23.884702 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-03-26 15:19:23.885870 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (24.2) 2025-03-26 15:19:23.895944 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8) 2025-03-26 15:19:23.897592 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-03-26 15:19:23.898789 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-03-26 15:19:23.900222 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-03-26 15:19:23.933081 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.1) 2025-03-26 15:19:23.935358 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-03-26 15:19:23.936732 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.3.0) 2025-03-26 15:19:23.938448 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.1.31) 2025-03-26 15:19:23.942586 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-03-26 15:19:24.170827 | orchestrator | ++ which gilt 2025-03-26 15:19:24.175191 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-03-26 15:19:24.476557 | orchestrator | + /opt/venv/bin/gilt overlay 2025-03-26 15:19:24.476656 | orchestrator | osism.cfg-generics: 2025-03-26 15:19:26.106136 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-03-26 15:19:26.106370 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-03-26 15:19:26.106513 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-03-26 15:19:26.106544 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-03-26 15:19:27.154790 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-03-26 15:19:27.154923 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-03-26 15:19:27.167683 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-03-26 15:19:27.562150 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-03-26 15:19:27.629524 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-26 15:19:27.629877 | orchestrator | + deactivate 2025-03-26 15:19:27.629923 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-03-26 15:19:27.629939 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-26 15:19:27.629953 | orchestrator | + export PATH 2025-03-26 15:19:27.629967 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-03-26 15:19:27.629982 | orchestrator | + '[' -n '' ']' 2025-03-26 15:19:27.629996 | orchestrator | + hash -r 2025-03-26 15:19:27.630010 | orchestrator | + '[' -n '' ']' 2025-03-26 15:19:27.630123 | orchestrator | + unset VIRTUAL_ENV 2025-03-26 15:19:27.630139 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-03-26 15:19:27.630153 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-03-26 15:19:27.630173 | orchestrator | ~ 2025-03-26 15:19:27.631566 | orchestrator | + unset -f deactivate 2025-03-26 15:19:27.631587 | orchestrator | + popd 2025-03-26 15:19:27.631606 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-03-26 15:19:27.632333 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-03-26 15:19:27.632357 | orchestrator | ++ semver 8.1.0 7.0.0 2025-03-26 15:19:27.683859 | orchestrator | + [[ 1 -ge 0 ]] 2025-03-26 15:19:27.727834 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-03-26 15:19:27.727901 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-03-26 15:19:27.727949 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-26 15:19:27.727987 | orchestrator | + source /opt/venv/bin/activate 2025-03-26 15:19:27.728016 | orchestrator | ++ deactivate nondestructive 2025-03-26 15:19:27.728047 | orchestrator | ++ '[' -n '' ']' 2025-03-26 15:19:27.728068 | orchestrator | ++ '[' -n '' ']' 2025-03-26 15:19:27.728278 | orchestrator | ++ hash -r 2025-03-26 15:19:27.728328 | orchestrator | ++ '[' -n '' ']' 2025-03-26 15:19:27.728612 | orchestrator | ++ unset VIRTUAL_ENV 2025-03-26 15:19:27.728633 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-03-26 15:19:27.728655 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-03-26 15:19:27.728674 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-03-26 15:19:27.728765 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-03-26 15:19:27.728789 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-03-26 15:19:27.728804 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-03-26 15:19:27.728819 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-26 15:19:27.728834 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-26 15:19:27.728848 | orchestrator | ++ export PATH 2025-03-26 15:19:27.728882 | orchestrator | ++ '[' -n '' ']' 2025-03-26 15:19:27.728903 | orchestrator | ++ '[' -z '' ']' 2025-03-26 15:19:27.728917 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-03-26 15:19:27.728931 | orchestrator | ++ PS1='(venv) ' 2025-03-26 15:19:27.728949 | orchestrator | ++ export PS1 2025-03-26 15:19:27.729082 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-03-26 15:19:27.729105 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-03-26 15:19:27.729368 | orchestrator | ++ hash -r 2025-03-26 15:19:29.106711 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-03-26 15:19:29.106891 | orchestrator | 2025-03-26 15:19:29.745691 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-03-26 15:19:29.745812 | orchestrator | 2025-03-26 15:19:29.745821 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-03-26 15:19:29.745841 | orchestrator | ok: [testbed-manager] 2025-03-26 15:19:30.860797 | orchestrator | 2025-03-26 15:19:30.860951 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-03-26 15:19:30.860992 | orchestrator | changed: [testbed-manager] 2025-03-26 15:19:33.509684 | orchestrator | 2025-03-26 15:19:33.509805 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-03-26 
15:19:33.509825 | orchestrator | 2025-03-26 15:19:33.509841 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-26 15:19:33.509872 | orchestrator | ok: [testbed-manager] 2025-03-26 15:19:39.576048 | orchestrator | 2025-03-26 15:19:39.576177 | orchestrator | TASK [Pull images] ************************************************************* 2025-03-26 15:19:39.576290 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-03-26 15:20:36.281241 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-03-26 15:20:36.281374 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-03-26 15:20:36.281395 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-03-26 15:20:36.281412 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-03-26 15:20:36.281427 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine) 2025-03-26 15:20:36.281441 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-03-26 15:20:36.281456 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-03-26 15:20:36.281470 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-03-26 15:20:36.281492 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-03-26 15:20:36.281507 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1) 2025-03-26 15:20:36.281521 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2) 2025-03-26 15:20:36.281535 | orchestrator | 2025-03-26 15:20:36.281549 | orchestrator | TASK [Check status] ************************************************************ 2025-03-26 15:20:36.281581 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-03-26 15:20:36.326416 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-03-26 15:20:36.326471 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-03-26 15:20:36.326487 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j558712221781.1568', 'results_file': '/home/dragon/.ansible_async/j558712221781.1568', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-03-26 15:20:36.326516 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j118979630188.1593', 'results_file': '/home/dragon/.ansible_async/j118979630188.1593', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-03-26 15:20:36.326533 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
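The "Pull images" task starts every docker pull asynchronously and the "Check status" task then polls the async job results, so the FAILED - RETRYING lines above are the normal polling loop (up to 120 retries) rather than pull failures; every item eventually reports changed. A rough shell analogue of the same pattern, using two image references from the list above:

    # Start pulls in the background, then wait for all of them to finish.
    docker pull registry.osism.tech/osism/osism-ansible:8.1.0 &
    docker pull index.docker.io/library/mariadb:11.6.2 &
    wait    # the Ansible version instead re-checks each async job until it reports finished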
2025-03-26 15:20:36.326548 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j997967891451.1618', 'results_file': '/home/dragon/.ansible_async/j997967891451.1618', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-03-26 15:20:36.326570 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j600511858778.1650', 'results_file': '/home/dragon/.ansible_async/j600511858778.1650', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-03-26 15:20:36.326588 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j104408652255.1682', 'results_file': '/home/dragon/.ansible_async/j104408652255.1682', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-03-26 15:20:36.326603 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j240219793102.1714', 'results_file': '/home/dragon/.ansible_async/j240219793102.1714', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-03-26 15:20:36.326617 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-03-26 15:20:36.326631 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j982582934559.1747', 'results_file': '/home/dragon/.ansible_async/j982582934559.1747', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-03-26 15:20:36.326674 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j134042138597.1780', 'results_file': '/home/dragon/.ansible_async/j134042138597.1780', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-03-26 15:20:36.326689 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j182805152.1813', 'results_file': '/home/dragon/.ansible_async/j182805152.1813', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-03-26 15:20:36.326703 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j411606674421.1853', 'results_file': '/home/dragon/.ansible_async/j411606674421.1853', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-03-26 15:20:36.326717 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j192039517170.1887', 'results_file': '/home/dragon/.ansible_async/j192039517170.1887', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-03-26 15:20:36.326731 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j322528256973.1919', 'results_file': '/home/dragon/.ansible_async/j322528256973.1919', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-03-26 15:20:36.326745 | orchestrator | 2025-03-26 15:20:36.326760 | orchestrator | TASK [Get /opt/manager-vars.sh] 
************************************************ 2025-03-26 15:20:36.326785 | orchestrator | ok: [testbed-manager] 2025-03-26 15:20:36.901837 | orchestrator | 2025-03-26 15:20:36.901947 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-03-26 15:20:36.901982 | orchestrator | changed: [testbed-manager] 2025-03-26 15:20:37.267843 | orchestrator | 2025-03-26 15:20:37.267936 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-03-26 15:20:37.267969 | orchestrator | changed: [testbed-manager] 2025-03-26 15:20:37.669522 | orchestrator | 2025-03-26 15:20:37.669605 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-03-26 15:20:37.669636 | orchestrator | changed: [testbed-manager] 2025-03-26 15:20:37.727331 | orchestrator | 2025-03-26 15:20:37.727360 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-03-26 15:20:37.727380 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:20:38.094404 | orchestrator | 2025-03-26 15:20:38.095452 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-03-26 15:20:38.095501 | orchestrator | ok: [testbed-manager] 2025-03-26 15:20:38.273755 | orchestrator | 2025-03-26 15:20:38.273850 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-03-26 15:20:38.273883 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:20:40.436229 | orchestrator | 2025-03-26 15:20:40.436375 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-03-26 15:20:40.436439 | orchestrator | 2025-03-26 15:20:40.436457 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-26 15:20:40.436488 | orchestrator | ok: [testbed-manager] 2025-03-26 15:20:40.701104 | orchestrator | 2025-03-26 15:20:40.701199 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-03-26 15:20:40.701235 | orchestrator | 2025-03-26 15:20:40.816885 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-03-26 15:20:40.816952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-03-26 15:20:42.076582 | orchestrator | 2025-03-26 15:20:42.076706 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-03-26 15:20:42.076743 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-03-26 15:20:44.121366 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-03-26 15:20:44.121536 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-03-26 15:20:44.121555 | orchestrator | 2025-03-26 15:20:44.121570 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-03-26 15:20:44.121602 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-03-26 15:20:44.886732 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-03-26 15:20:44.886842 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-03-26 15:20:44.886862 | orchestrator | 2025-03-26 15:20:44.886878 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2025-03-26 15:20:44.886908 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-26 15:20:45.666810 | orchestrator | changed: [testbed-manager] 2025-03-26 15:20:45.666896 | orchestrator | 2025-03-26 15:20:45.666911 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-03-26 15:20:45.666938 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-26 15:20:45.761279 | orchestrator | changed: [testbed-manager] 2025-03-26 15:20:45.761303 | orchestrator | 2025-03-26 15:20:45.761316 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-03-26 15:20:45.761334 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:20:46.200222 | orchestrator | 2025-03-26 15:20:46.200331 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-03-26 15:20:46.200362 | orchestrator | ok: [testbed-manager] 2025-03-26 15:20:46.322587 | orchestrator | 2025-03-26 15:20:46.322641 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-03-26 15:20:46.322664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-03-26 15:20:47.501020 | orchestrator | 2025-03-26 15:20:47.501120 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-03-26 15:20:47.501149 | orchestrator | changed: [testbed-manager] 2025-03-26 15:20:48.513761 | orchestrator | 2025-03-26 15:20:48.513846 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-03-26 15:20:48.513875 | orchestrator | changed: [testbed-manager] 2025-03-26 15:20:51.776346 | orchestrator | 2025-03-26 15:20:51.776463 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-03-26 15:20:51.776498 | orchestrator | changed: [testbed-manager] 2025-03-26 15:20:52.239442 | orchestrator | 2025-03-26 15:20:52.239495 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-03-26 15:20:52.239520 | orchestrator | 2025-03-26 15:20:52.371198 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-03-26 15:20:52.371249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-03-26 15:20:55.446616 | orchestrator | 2025-03-26 15:20:55.446725 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-03-26 15:20:55.446756 | orchestrator | ok: [testbed-manager] 2025-03-26 15:20:55.693304 | orchestrator | 2025-03-26 15:20:55.693393 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-03-26 15:20:55.693424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-03-26 15:20:56.913135 | orchestrator | 2025-03-26 15:20:56.913269 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-03-26 15:20:56.913930 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-03-26 15:20:57.054227 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-03-26 15:20:57.054273 | orchestrator | 
changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-03-26 15:20:57.054288 | orchestrator | 2025-03-26 15:20:57.054302 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-03-26 15:20:57.054325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-03-26 15:20:57.798935 | orchestrator | 2025-03-26 15:20:57.799065 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-03-26 15:20:57.799096 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-03-26 15:20:58.533464 | orchestrator | 2025-03-26 15:20:58.533556 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-03-26 15:20:58.533599 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-26 15:20:58.980071 | orchestrator | changed: [testbed-manager] 2025-03-26 15:20:58.980128 | orchestrator | 2025-03-26 15:20:58.980143 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-03-26 15:20:58.980168 | orchestrator | changed: [testbed-manager] 2025-03-26 15:20:59.369349 | orchestrator | 2025-03-26 15:20:59.369392 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-03-26 15:20:59.369413 | orchestrator | ok: [testbed-manager] 2025-03-26 15:20:59.432439 | orchestrator | 2025-03-26 15:20:59.432468 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-03-26 15:20:59.432488 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:21:00.142343 | orchestrator | 2025-03-26 15:21:00.142410 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-03-26 15:21:00.142434 | orchestrator | changed: [testbed-manager] 2025-03-26 15:21:00.275720 | orchestrator | 2025-03-26 15:21:00.275750 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-03-26 15:21:00.275772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-03-26 15:21:01.109726 | orchestrator | 2025-03-26 15:21:01.109836 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-03-26 15:21:01.109866 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-03-26 15:21:01.868256 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-03-26 15:21:01.868368 | orchestrator | 2025-03-26 15:21:01.868386 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-03-26 15:21:01.868416 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-03-26 15:21:02.671613 | orchestrator | 2025-03-26 15:21:02.671727 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-03-26 15:21:02.671760 | orchestrator | changed: [testbed-manager] 2025-03-26 15:21:02.740992 | orchestrator | 2025-03-26 15:21:02.741025 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-03-26 15:21:02.741048 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:21:03.456070 | orchestrator | 2025-03-26 15:21:03.456156 | orchestrator | TASK 
[osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-03-26 15:21:03.456186 | orchestrator | changed: [testbed-manager] 2025-03-26 15:21:05.508195 | orchestrator | 2025-03-26 15:21:05.508314 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-03-26 15:21:05.508351 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-26 15:21:12.102521 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-26 15:21:12.102652 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-26 15:21:12.102672 | orchestrator | changed: [testbed-manager] 2025-03-26 15:21:12.102690 | orchestrator | 2025-03-26 15:21:12.102705 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-03-26 15:21:12.102740 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-03-26 15:21:12.934648 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-03-26 15:21:12.934759 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-03-26 15:21:12.934777 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-03-26 15:21:12.934792 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-03-26 15:21:12.934807 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-03-26 15:21:12.934821 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-03-26 15:21:12.934835 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-03-26 15:21:12.934849 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-03-26 15:21:12.934863 | orchestrator | changed: [testbed-manager] => (item=users) 2025-03-26 15:21:12.934877 | orchestrator | 2025-03-26 15:21:12.934892 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-03-26 15:21:12.934991 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-03-26 15:21:13.125890 | orchestrator | 2025-03-26 15:21:13.125988 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-03-26 15:21:13.126015 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-03-26 15:21:13.967339 | orchestrator | 2025-03-26 15:21:13.967449 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-03-26 15:21:13.967482 | orchestrator | changed: [testbed-manager] 2025-03-26 15:21:14.704476 | orchestrator | 2025-03-26 15:21:14.704576 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-03-26 15:21:14.704611 | orchestrator | ok: [testbed-manager] 2025-03-26 15:21:15.437156 | orchestrator | 2025-03-26 15:21:15.437262 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-03-26 15:21:15.437298 | orchestrator | changed: [testbed-manager] 2025-03-26 15:21:20.055600 | orchestrator | 2025-03-26 15:21:20.055737 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-03-26 15:21:20.055775 | orchestrator | changed: [testbed-manager] 2025-03-26 15:21:21.123721 | orchestrator | 2025-03-26 15:21:21.123844 | orchestrator | TASK [osism.services.netbox : Stop and disable old service 
docker-compose@netbox] *** 2025-03-26 15:21:21.123916 | orchestrator | ok: [testbed-manager] 2025-03-26 15:21:43.548652 | orchestrator | 2025-03-26 15:21:43.548783 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-03-26 15:21:43.548844 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-03-26 15:21:43.656410 | orchestrator | ok: [testbed-manager] 2025-03-26 15:21:43.656473 | orchestrator | 2025-03-26 15:21:43.656485 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-03-26 15:21:43.656504 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:21:43.729298 | orchestrator | 2025-03-26 15:21:43.729334 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-03-26 15:21:43.729347 | orchestrator | 2025-03-26 15:21:43.729360 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-03-26 15:21:43.729379 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:21:43.850698 | orchestrator | 2025-03-26 15:21:43.850776 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-03-26 15:21:43.850807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-03-26 15:21:44.887981 | orchestrator | 2025-03-26 15:21:44.888089 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-03-26 15:21:44.888123 | orchestrator | ok: [testbed-manager] 2025-03-26 15:21:44.993923 | orchestrator | 2025-03-26 15:21:44.993961 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-03-26 15:21:44.993983 | orchestrator | ok: [testbed-manager] 2025-03-26 15:21:45.056138 | orchestrator | 2025-03-26 15:21:45.056214 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-03-26 15:21:45.056239 | orchestrator | ok: [testbed-manager] => { 2025-03-26 15:21:45.845217 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-03-26 15:21:45.845310 | orchestrator | } 2025-03-26 15:21:45.845323 | orchestrator | 2025-03-26 15:21:45.845333 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-03-26 15:21:45.845359 | orchestrator | ok: [testbed-manager] 2025-03-26 15:21:46.937449 | orchestrator | 2025-03-26 15:21:46.937541 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-03-26 15:21:46.937570 | orchestrator | ok: [testbed-manager] 2025-03-26 15:21:47.046066 | orchestrator | 2025-03-26 15:21:47.046100 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-03-26 15:21:47.046118 | orchestrator | ok: [testbed-manager] 2025-03-26 15:21:47.116878 | orchestrator | 2025-03-26 15:21:47.116902 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-03-26 15:21:47.116920 | orchestrator | ok: [testbed-manager] => { 2025-03-26 15:21:47.222392 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-03-26 15:21:47.222431 | orchestrator | } 2025-03-26 15:21:47.222471 | orchestrator | 2025-03-26 15:21:47.222484 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop 
netbox service] ****************** 2025-03-26 15:21:47.222513 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:21:47.287571 | orchestrator | 2025-03-26 15:21:47.287619 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-03-26 15:21:47.287638 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:21:47.387420 | orchestrator | 2025-03-26 15:21:47.387456 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-03-26 15:21:47.387474 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:21:47.449669 | orchestrator | 2025-03-26 15:21:47.449695 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-03-26 15:21:47.449713 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:21:47.522215 | orchestrator | 2025-03-26 15:21:47.522240 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-03-26 15:21:47.522258 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:21:47.606631 | orchestrator | 2025-03-26 15:21:47.606657 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-03-26 15:21:47.606674 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:21:49.170443 | orchestrator | 2025-03-26 15:21:49.170509 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-03-26 15:21:49.170528 | orchestrator | changed: [testbed-manager] 2025-03-26 15:21:49.284666 | orchestrator | 2025-03-26 15:21:49.284746 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-03-26 15:21:49.284778 | orchestrator | ok: [testbed-manager] 2025-03-26 15:22:49.369103 | orchestrator | 2025-03-26 15:22:49.369265 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-03-26 15:22:49.369320 | orchestrator | Pausing for 60 seconds 2025-03-26 15:22:49.465870 | orchestrator | changed: [testbed-manager] 2025-03-26 15:22:49.465910 | orchestrator | 2025-03-26 15:22:49.465925 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-03-26 15:22:49.465950 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-03-26 15:28:07.552826 | orchestrator | 2025-03-26 15:28:07.552972 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-03-26 15:28:07.553028 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-03-26 15:28:10.049452 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-03-26 15:28:10.049553 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-03-26 15:28:10.049573 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-03-26 15:28:10.049591 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-03-26 15:28:10.049606 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 
2025-03-26 15:28:10.049622 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-03-26 15:28:10.049636 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-03-26 15:28:10.049651 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-03-26 15:28:10.049666 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-03-26 15:28:10.049680 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-03-26 15:28:10.049695 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-03-26 15:28:10.049710 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-03-26 15:28:10.049725 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-03-26 15:28:10.049762 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-03-26 15:28:10.049778 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-03-26 15:28:10.049792 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-03-26 15:28:10.049807 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-03-26 15:28:10.049822 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-03-26 15:28:10.049846 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-03-26 15:28:10.049861 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-03-26 15:28:10.049876 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-03-26 15:28:10.049891 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-03-26 15:28:10.049905 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 2025-03-26 15:28:10.049919 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (36 retries left). 2025-03-26 15:28:10.049934 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (35 retries left). 2025-03-26 15:28:10.049948 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (34 retries left). 2025-03-26 15:28:10.049962 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (33 retries left). 2025-03-26 15:28:10.049977 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (32 retries left). 2025-03-26 15:28:10.049991 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (31 retries left). 
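The long retry run above is the netbox role waiting for the restarted stack to pass its container health checks; the handler allows 60 retries and roughly half of them are used up while the netbox container completes its first start. A comparable docker-level probe, sketched in shell under the assumption that the compose project started from /opt/netbox is named "netbox" (the retry count and delay below are likewise assumptions, not the role's values):

# Loop while any container in the project is still starting or unhealthy.
attempt=0
while docker ps --filter label=com.docker.compose.project=netbox \
                --filter health=starting --filter health=unhealthy -q | grep -q .; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge 60 ]; then
        echo "netbox containers did not reach a good state" >&2
        exit 1
    fi
    sleep 10
done
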
2025-03-26 15:28:10.050006 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:10.050113 | orchestrator | 2025-03-26 15:28:10.050132 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-03-26 15:28:10.050148 | orchestrator | 2025-03-26 15:28:10.050164 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-26 15:28:10.050194 | orchestrator | ok: [testbed-manager] 2025-03-26 15:28:10.189139 | orchestrator | 2025-03-26 15:28:10.189209 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-03-26 15:28:10.189236 | orchestrator | 2025-03-26 15:28:10.254656 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-03-26 15:28:10.254698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-03-26 15:28:12.285856 | orchestrator | 2025-03-26 15:28:12.285974 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-03-26 15:28:12.286011 | orchestrator | ok: [testbed-manager] 2025-03-26 15:28:12.346582 | orchestrator | 2025-03-26 15:28:12.346651 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-03-26 15:28:12.346680 | orchestrator | ok: [testbed-manager] 2025-03-26 15:28:12.473123 | orchestrator | 2025-03-26 15:28:12.473294 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-03-26 15:28:12.473331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-03-26 15:28:15.607533 | orchestrator | 2025-03-26 15:28:15.607748 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-03-26 15:28:15.607790 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-03-26 15:28:16.338726 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-03-26 15:28:16.338810 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-03-26 15:28:16.338825 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-03-26 15:28:16.338854 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-03-26 15:28:16.338865 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-03-26 15:28:16.338876 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-03-26 15:28:16.338886 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-03-26 15:28:16.338897 | orchestrator | 2025-03-26 15:28:16.338908 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-03-26 15:28:16.338930 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:16.439989 | orchestrator | 2025-03-26 15:28:16.440109 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-03-26 15:28:16.440142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-03-26 15:28:17.780367 | orchestrator | 2025-03-26 15:28:17.780455 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-03-26 15:28:17.780485 | orchestrator | 
changed: [testbed-manager] => (item=ara) 2025-03-26 15:28:18.470535 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-03-26 15:28:18.470641 | orchestrator | 2025-03-26 15:28:18.470659 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-03-26 15:28:18.470691 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:18.544498 | orchestrator | 2025-03-26 15:28:18.544606 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-03-26 15:28:18.544638 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:28:18.623517 | orchestrator | 2025-03-26 15:28:18.623593 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-03-26 15:28:18.623621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-03-26 15:28:20.160830 | orchestrator | 2025-03-26 15:28:20.160949 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-03-26 15:28:20.160986 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-26 15:28:20.898574 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-26 15:28:20.898674 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:20.898694 | orchestrator | 2025-03-26 15:28:20.898710 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-03-26 15:28:20.898739 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:21.003113 | orchestrator | 2025-03-26 15:28:21.003171 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-03-26 15:28:21.003198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-03-26 15:28:21.697681 | orchestrator | 2025-03-26 15:28:21.697781 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-03-26 15:28:21.697814 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-26 15:28:22.383124 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:22.383237 | orchestrator | 2025-03-26 15:28:22.383256 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-03-26 15:28:22.383286 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:22.493244 | orchestrator | 2025-03-26 15:28:22.493354 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-03-26 15:28:22.493387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-03-26 15:28:23.089654 | orchestrator | 2025-03-26 15:28:23.089765 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-03-26 15:28:23.089801 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:23.522316 | orchestrator | 2025-03-26 15:28:23.522414 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-03-26 15:28:23.522457 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:24.881947 | orchestrator | 2025-03-26 15:28:24.882142 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-03-26 15:28:24.882181 | 
orchestrator | changed: [testbed-manager] => (item=conductor) 2025-03-26 15:28:25.575895 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-03-26 15:28:25.576061 | orchestrator | 2025-03-26 15:28:25.576083 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-03-26 15:28:25.576115 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:25.931375 | orchestrator | 2025-03-26 15:28:25.931458 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-03-26 15:28:25.931487 | orchestrator | ok: [testbed-manager] 2025-03-26 15:28:25.981150 | orchestrator | 2025-03-26 15:28:25.981182 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-03-26 15:28:25.981203 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:28:26.678343 | orchestrator | 2025-03-26 15:28:26.678461 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-03-26 15:28:26.678495 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:26.814720 | orchestrator | 2025-03-26 15:28:26.814807 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-03-26 15:28:26.814838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-03-26 15:28:26.877317 | orchestrator | 2025-03-26 15:28:26.877413 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-03-26 15:28:26.877444 | orchestrator | ok: [testbed-manager] 2025-03-26 15:28:29.035213 | orchestrator | 2025-03-26 15:28:29.035330 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-03-26 15:28:29.035363 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-03-26 15:28:29.820818 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-03-26 15:28:29.820924 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-03-26 15:28:29.820944 | orchestrator | 2025-03-26 15:28:29.820959 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-03-26 15:28:29.820992 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:29.885659 | orchestrator | 2025-03-26 15:28:29.885713 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-03-26 15:28:29.885740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-03-26 15:28:29.947912 | orchestrator | 2025-03-26 15:28:29.947955 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-03-26 15:28:29.947978 | orchestrator | ok: [testbed-manager] 2025-03-26 15:28:30.728727 | orchestrator | 2025-03-26 15:28:30.728827 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-03-26 15:28:30.728860 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-03-26 15:28:30.827933 | orchestrator | 2025-03-26 15:28:30.827981 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-03-26 15:28:30.828007 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-03-26 15:28:31.661684 | orchestrator | 2025-03-26 15:28:31.661801 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-03-26 15:28:31.661836 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:32.339353 | orchestrator | 2025-03-26 15:28:32.339469 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-03-26 15:28:32.339502 | orchestrator | ok: [testbed-manager] 2025-03-26 15:28:32.409236 | orchestrator | 2025-03-26 15:28:32.409280 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-03-26 15:28:32.409308 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:28:32.471560 | orchestrator | 2025-03-26 15:28:32.471594 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-03-26 15:28:32.471615 | orchestrator | ok: [testbed-manager] 2025-03-26 15:28:33.413413 | orchestrator | 2025-03-26 15:28:33.413525 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-03-26 15:28:33.413559 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:57.257005 | orchestrator | 2025-03-26 15:28:57.257147 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-03-26 15:28:57.257185 | orchestrator | changed: [testbed-manager] 2025-03-26 15:28:58.006390 | orchestrator | 2025-03-26 15:28:58.006490 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-03-26 15:28:58.006522 | orchestrator | ok: [testbed-manager] 2025-03-26 15:29:00.615360 | orchestrator | 2025-03-26 15:29:00.615482 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-03-26 15:29:00.615518 | orchestrator | changed: [testbed-manager] 2025-03-26 15:29:00.672318 | orchestrator | 2025-03-26 15:29:00.672352 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-03-26 15:29:00.672373 | orchestrator | ok: [testbed-manager] 2025-03-26 15:29:00.727294 | orchestrator | 2025-03-26 15:29:00.727390 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-03-26 15:29:00.727410 | orchestrator | 2025-03-26 15:29:00.727425 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-03-26 15:29:00.727455 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:30:00.801036 | orchestrator | 2025-03-26 15:30:00.801164 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-03-26 15:30:00.801200 | orchestrator | Pausing for 60 seconds 2025-03-26 15:30:07.603114 | orchestrator | changed: [testbed-manager] 2025-03-26 15:30:07.603227 | orchestrator | 2025-03-26 15:30:07.603247 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-03-26 15:30:07.603279 | orchestrator | changed: [testbed-manager] 2025-03-26 15:30:49.858842 | orchestrator | 2025-03-26 15:30:49.859023 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-03-26 15:30:49.859061 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 
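Service handling for the manager follows the same pattern as for netbox earlier: a dedicated systemd unit is installed ("Copy manager systemd unit file"), the old docker-compose@manager template instance is stopped and disabled, the new unit is started, and handlers then pause and poll until the stack reports healthy. The sketch below only illustrates the shape of such a compose-wrapping unit; the real unit content is not visible in this log, so the unit name, options and everything except the /opt/manager project directory are assumptions:

# Illustration only -- not the unit file shipped by osism.services.manager.
cat > /etc/systemd/system/manager.service <<'EOF'
[Unit]
Description=manager service (docker compose)
Requires=docker.service
After=docker.service

[Service]
WorkingDirectory=/opt/manager
ExecStart=/usr/bin/docker compose up --remove-orphans
ExecStop=/usr/bin/docker compose down
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now manager.service    # "Manage manager service"
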
2025-03-26 15:30:57.134343 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-03-26 15:30:57.134511 | orchestrator | changed: [testbed-manager] 2025-03-26 15:30:57.134536 | orchestrator | 2025-03-26 15:30:57.134551 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-03-26 15:30:57.134584 | orchestrator | changed: [testbed-manager] 2025-03-26 15:30:57.233587 | orchestrator | 2025-03-26 15:30:57.233665 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-03-26 15:30:57.233695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-03-26 15:30:57.310242 | orchestrator | 2025-03-26 15:30:57.310314 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-03-26 15:30:57.310331 | orchestrator | 2025-03-26 15:30:57.310359 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-03-26 15:30:57.310385 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:30:57.498277 | orchestrator | 2025-03-26 15:30:57.498337 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:30:57.498353 | orchestrator | testbed-manager : ok=103 changed=55 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-03-26 15:30:57.498368 | orchestrator | 2025-03-26 15:30:57.498393 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-03-26 15:30:57.504578 | orchestrator | + deactivate 2025-03-26 15:30:57.504606 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-03-26 15:30:57.504621 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-03-26 15:30:57.504635 | orchestrator | + export PATH 2025-03-26 15:30:57.504649 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-03-26 15:30:57.504663 | orchestrator | + '[' -n '' ']' 2025-03-26 15:30:57.504676 | orchestrator | + hash -r 2025-03-26 15:30:57.504770 | orchestrator | + '[' -n '' ']' 2025-03-26 15:30:57.504788 | orchestrator | + unset VIRTUAL_ENV 2025-03-26 15:30:57.504802 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-03-26 15:30:57.504816 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-03-26 15:30:57.504830 | orchestrator | + unset -f deactivate 2025-03-26 15:30:57.504845 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-03-26 15:30:57.504865 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-03-26 15:30:57.540570 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-03-26 15:30:57.540617 | orchestrator | + local max_attempts=60 2025-03-26 15:30:57.540663 | orchestrator | + local name=ceph-ansible 2025-03-26 15:30:57.540678 | orchestrator | + local attempt_num=1 2025-03-26 15:30:57.540704 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-03-26 15:30:57.540726 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-26 15:30:57.541286 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-03-26 15:30:57.541328 | orchestrator | + local max_attempts=60 2025-03-26 15:30:57.541346 | orchestrator | + local name=kolla-ansible 2025-03-26 15:30:57.541362 | orchestrator | + local attempt_num=1 2025-03-26 15:30:57.541384 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-03-26 15:30:57.573507 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-26 15:30:57.575138 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-03-26 15:30:57.575165 | orchestrator | + local max_attempts=60 2025-03-26 15:30:57.575176 | orchestrator | + local name=osism-ansible 2025-03-26 15:30:57.575186 | orchestrator | + local attempt_num=1 2025-03-26 15:30:57.575202 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-03-26 15:30:57.606783 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-26 15:30:59.018200 | orchestrator | + [[ true == \t\r\u\e ]] 2025-03-26 15:30:59.018337 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-03-26 15:30:59.018377 | orchestrator | ++ semver 8.1.0 9.0.0 2025-03-26 15:30:59.065385 | orchestrator | + [[ -1 -ge 0 ]] 2025-03-26 15:30:59.375522 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-03-26 15:30:59.375631 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-03-26 15:30:59.375667 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-03-26 15:30:59.382107 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-03-26 15:30:59.382216 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-03-26 15:30:59.382235 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-03-26 15:30:59.382272 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-03-26 15:30:59.382288 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-03-26 15:30:59.382306 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-03-26 15:30:59.382321 | orchestrator | manager-flower-1 
registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-03-26 15:30:59.382335 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2025-03-26 15:30:59.382349 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-03-26 15:30:59.382364 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-03-26 15:30:59.382378 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-03-26 15:30:59.382418 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-03-26 15:30:59.382434 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-03-26 15:30:59.382448 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-03-26 15:30:59.382461 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-03-26 15:30:59.382475 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-03-26 15:30:59.382488 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-03-26 15:30:59.382520 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-03-26 15:30:59.576970 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-03-26 15:30:59.584568 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 9 minutes ago Up 8 minutes (healthy) 2025-03-26 15:30:59.584601 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 9 minutes ago Up 3 minutes (healthy) 2025-03-26 15:30:59.584617 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 9 minutes ago Up 9 minutes (healthy) 5432/tcp 2025-03-26 15:30:59.584631 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 9 minutes ago Up 9 minutes (healthy) 6379/tcp 2025-03-26 15:30:59.584651 | orchestrator | ++ semver 8.1.0 7.0.0 2025-03-26 15:30:59.642504 | orchestrator | + [[ 1 -ge 0 ]] 2025-03-26 15:30:59.645856 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-03-26 15:30:59.645926 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-03-26 15:31:01.496390 | orchestrator | 2025-03-26 15:31:01 | INFO  | Task 764e02f6-12a6-4e8b-a923-d593f50bbe8d (resolvconf) was prepared for execution. 
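The shell trace above shows a wait_for_container_healthy helper that reads a container's health state via docker inspect; because ceph-ansible, kolla-ansible and osism-ansible were already healthy, only the success path appears in the trace. A reconstruction consistent with what is traced (the retry-and-sleep branch is an assumption, since it never ran here):

# max_attempts and name come straight from the trace; the failure handling
# and sleep interval are guesses.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
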
2025-03-26 15:31:04.776762 | orchestrator | 2025-03-26 15:31:01 | INFO  | It takes a moment until task 764e02f6-12a6-4e8b-a923-d593f50bbe8d (resolvconf) has been started and output is visible here. 2025-03-26 15:31:04.776933 | orchestrator | 2025-03-26 15:31:04.778514 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-03-26 15:31:04.778980 | orchestrator | 2025-03-26 15:31:04.779316 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-26 15:31:04.779700 | orchestrator | Wednesday 26 March 2025 15:31:04 +0000 (0:00:00.102) 0:00:00.102 ******* 2025-03-26 15:31:09.824667 | orchestrator | ok: [testbed-manager] 2025-03-26 15:31:09.824875 | orchestrator | 2025-03-26 15:31:09.825043 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-03-26 15:31:09.826956 | orchestrator | Wednesday 26 March 2025 15:31:09 +0000 (0:00:05.053) 0:00:05.156 ******* 2025-03-26 15:31:09.883004 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:31:09.883363 | orchestrator | 2025-03-26 15:31:09.883392 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-03-26 15:31:09.884069 | orchestrator | Wednesday 26 March 2025 15:31:09 +0000 (0:00:00.059) 0:00:05.215 ******* 2025-03-26 15:31:09.974632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-03-26 15:31:09.975077 | orchestrator | 2025-03-26 15:31:09.975847 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-03-26 15:31:09.975981 | orchestrator | Wednesday 26 March 2025 15:31:09 +0000 (0:00:00.088) 0:00:05.304 ******* 2025-03-26 15:31:10.065734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-03-26 15:31:10.066341 | orchestrator | 2025-03-26 15:31:10.067240 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-03-26 15:31:10.067998 | orchestrator | Wednesday 26 March 2025 15:31:10 +0000 (0:00:00.092) 0:00:05.396 ******* 2025-03-26 15:31:11.297548 | orchestrator | ok: [testbed-manager] 2025-03-26 15:31:11.299383 | orchestrator | 2025-03-26 15:31:11.299774 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-03-26 15:31:11.300560 | orchestrator | Wednesday 26 March 2025 15:31:11 +0000 (0:00:01.230) 0:00:06.626 ******* 2025-03-26 15:31:11.367011 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:31:11.367836 | orchestrator | 2025-03-26 15:31:11.369209 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-03-26 15:31:11.369840 | orchestrator | Wednesday 26 March 2025 15:31:11 +0000 (0:00:00.071) 0:00:06.697 ******* 2025-03-26 15:31:11.926287 | orchestrator | ok: [testbed-manager] 2025-03-26 15:31:11.927199 | orchestrator | 2025-03-26 15:31:11.927989 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-03-26 15:31:11.928025 | orchestrator | Wednesday 26 March 2025 15:31:11 +0000 (0:00:00.559) 0:00:07.256 ******* 2025-03-26 15:31:12.012555 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:31:12.013381 | orchestrator | 2025-03-26 15:31:12.014358 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-03-26 15:31:12.015649 | orchestrator | Wednesday 26 March 2025 15:31:12 +0000 (0:00:00.083) 0:00:07.341 ******* 2025-03-26 15:31:12.633367 | orchestrator | changed: [testbed-manager] 2025-03-26 15:31:12.634112 | orchestrator | 2025-03-26 15:31:12.634148 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-03-26 15:31:12.634170 | orchestrator | Wednesday 26 March 2025 15:31:12 +0000 (0:00:00.624) 0:00:07.965 ******* 2025-03-26 15:31:13.791729 | orchestrator | changed: [testbed-manager] 2025-03-26 15:31:13.792166 | orchestrator | 2025-03-26 15:31:13.792430 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-03-26 15:31:13.793377 | orchestrator | Wednesday 26 March 2025 15:31:13 +0000 (0:00:01.156) 0:00:09.122 ******* 2025-03-26 15:31:14.830794 | orchestrator | ok: [testbed-manager] 2025-03-26 15:31:14.831815 | orchestrator | 2025-03-26 15:31:14.832684 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-03-26 15:31:14.833829 | orchestrator | Wednesday 26 March 2025 15:31:14 +0000 (0:00:01.039) 0:00:10.161 ******* 2025-03-26 15:31:14.924233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-03-26 15:31:14.925072 | orchestrator | 2025-03-26 15:31:14.925561 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-03-26 15:31:14.926988 | orchestrator | Wednesday 26 March 2025 15:31:14 +0000 (0:00:00.092) 0:00:10.253 ******* 2025-03-26 15:31:16.209865 | orchestrator | changed: [testbed-manager] 2025-03-26 15:31:16.210657 | orchestrator | 2025-03-26 15:31:16.211383 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:31:16.212541 | orchestrator | 2025-03-26 15:31:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:31:16.212819 | orchestrator | 2025-03-26 15:31:16 | INFO  | Please wait and do not abort execution. 
2025-03-26 15:31:16.214247 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-26 15:31:16.215142 | orchestrator | 2025-03-26 15:31:16.216183 | orchestrator | Wednesday 26 March 2025 15:31:16 +0000 (0:00:01.287) 0:00:11.541 ******* 2025-03-26 15:31:16.217318 | orchestrator | =============================================================================== 2025-03-26 15:31:16.218170 | orchestrator | Gathering Facts --------------------------------------------------------- 5.05s 2025-03-26 15:31:16.219487 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.29s 2025-03-26 15:31:16.219778 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.23s 2025-03-26 15:31:16.221120 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.16s 2025-03-26 15:31:16.221567 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.04s 2025-03-26 15:31:16.221956 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.62s 2025-03-26 15:31:16.222576 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.56s 2025-03-26 15:31:16.224142 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-03-26 15:31:16.224985 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-03-26 15:31:16.225857 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-03-26 15:31:16.226615 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-03-26 15:31:16.227349 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-03-26 15:31:16.228284 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-03-26 15:31:16.720676 | orchestrator | + osism apply sshconfig 2025-03-26 15:31:18.335272 | orchestrator | 2025-03-26 15:31:18 | INFO  | Task c437be31-bc0a-43be-bd78-c5694f49d27a (sshconfig) was prepared for execution. 2025-03-26 15:31:18.336439 | orchestrator | 2025-03-26 15:31:18 | INFO  | It takes a moment until task c437be31-bc0a-43be-bd78-c5694f49d27a (sshconfig) has been started and output is visible here. 
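The resolvconf play above switches the manager to systemd-resolved: it removes competing resolver packages, links /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf, copies the role's configuration files and restarts the service. A quick manual check of that state might look like the following sketch (illustrative only; the paths are the ones reported by the role, while the readlink/systemctl/resolvectl calls are assumptions about what is available on the Ubuntu 24.04 manager, not commands run by the job):

    # verify that /etc/resolv.conf is the systemd-resolved stub and that the service is up
    readlink -f /etc/resolv.conf           # expected: /run/systemd/resolve/stub-resolv.conf
    systemctl is-active systemd-resolved   # expected: active
    resolvectl status | head               # shows the name servers configured by the role
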
2025-03-26 15:31:21.691666 | orchestrator | 2025-03-26 15:31:21.694322 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-03-26 15:31:21.696294 | orchestrator | 2025-03-26 15:31:21.696956 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-03-26 15:31:21.698287 | orchestrator | Wednesday 26 March 2025 15:31:21 +0000 (0:00:00.120) 0:00:00.120 ******* 2025-03-26 15:31:22.299464 | orchestrator | ok: [testbed-manager] 2025-03-26 15:31:22.300206 | orchestrator | 2025-03-26 15:31:22.301924 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-03-26 15:31:22.850258 | orchestrator | Wednesday 26 March 2025 15:31:22 +0000 (0:00:00.611) 0:00:00.731 ******* 2025-03-26 15:31:22.850386 | orchestrator | changed: [testbed-manager] 2025-03-26 15:31:22.850955 | orchestrator | 2025-03-26 15:31:22.852565 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-03-26 15:31:22.853023 | orchestrator | Wednesday 26 March 2025 15:31:22 +0000 (0:00:00.550) 0:00:01.281 ******* 2025-03-26 15:31:29.196656 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-03-26 15:31:29.198077 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-03-26 15:31:29.198251 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-03-26 15:31:29.198760 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-03-26 15:31:29.199151 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-03-26 15:31:29.199625 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-03-26 15:31:29.199976 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-03-26 15:31:29.200756 | orchestrator | 2025-03-26 15:31:29.201152 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-03-26 15:31:29.201387 | orchestrator | Wednesday 26 March 2025 15:31:29 +0000 (0:00:06.345) 0:00:07.626 ******* 2025-03-26 15:31:29.282012 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:31:29.283144 | orchestrator | 2025-03-26 15:31:29.283183 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-03-26 15:31:29.285274 | orchestrator | Wednesday 26 March 2025 15:31:29 +0000 (0:00:00.087) 0:00:07.714 ******* 2025-03-26 15:31:29.916225 | orchestrator | changed: [testbed-manager] 2025-03-26 15:31:29.916737 | orchestrator | 2025-03-26 15:31:29.918201 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:31:29.918595 | orchestrator | 2025-03-26 15:31:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:31:29.918766 | orchestrator | 2025-03-26 15:31:29 | INFO  | Please wait and do not abort execution. 
2025-03-26 15:31:29.920214 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:31:29.921200 | orchestrator | 2025-03-26 15:31:29.922725 | orchestrator | Wednesday 26 March 2025 15:31:29 +0000 (0:00:00.633) 0:00:08.348 ******* 2025-03-26 15:31:29.924191 | orchestrator | =============================================================================== 2025-03-26 15:31:29.924780 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.35s 2025-03-26 15:31:29.925397 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s 2025-03-26 15:31:29.925828 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.61s 2025-03-26 15:31:29.926404 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.55s 2025-03-26 15:31:29.927267 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2025-03-26 15:31:30.376550 | orchestrator | + osism apply known-hosts 2025-03-26 15:31:31.971314 | orchestrator | 2025-03-26 15:31:31 | INFO  | Task cf770d64-91bf-4a06-88ab-c7be17bdc9ad (known-hosts) was prepared for execution. 2025-03-26 15:31:35.280626 | orchestrator | 2025-03-26 15:31:31 | INFO  | It takes a moment until task cf770d64-91bf-4a06-88ab-c7be17bdc9ad (known-hosts) has been started and output is visible here. 2025-03-26 15:31:35.280763 | orchestrator | 2025-03-26 15:31:35.281541 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-03-26 15:31:35.281570 | orchestrator | 2025-03-26 15:31:35.281603 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-03-26 15:31:35.285319 | orchestrator | Wednesday 26 March 2025 15:31:35 +0000 (0:00:00.128) 0:00:00.128 ******* 2025-03-26 15:31:41.447766 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-03-26 15:31:41.449766 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-03-26 15:31:41.450985 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-03-26 15:31:41.452804 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-03-26 15:31:41.453243 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-03-26 15:31:41.454424 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-03-26 15:31:41.455254 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-03-26 15:31:41.455976 | orchestrator | 2025-03-26 15:31:41.456950 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-03-26 15:31:41.457888 | orchestrator | Wednesday 26 March 2025 15:31:41 +0000 (0:00:06.171) 0:00:06.299 ******* 2025-03-26 15:31:41.630277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-03-26 15:31:41.630920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-03-26 15:31:41.631380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-03-26 
15:31:41.632934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-03-26 15:31:41.633894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-03-26 15:31:41.634897 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-03-26 15:31:41.635701 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-03-26 15:31:41.637002 | orchestrator | 2025-03-26 15:31:41.637703 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:31:41.637813 | orchestrator | Wednesday 26 March 2025 15:31:41 +0000 (0:00:00.184) 0:00:06.484 ******* 2025-03-26 15:31:42.911988 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjPyNXJqQkW6O/QBbDd3/ixHTB54dHpEYQ2U2EZu7Kz2wPVWr9RY4OyXbZ5M/7Qv31oQ7Uk8uT/5JY0+ODoeyv0AE6v8agiuGxXBUArVFRp+hHX7FUFpSvGR8dldl35r8X6Gwc4J0RdfoiPtLk3uWB30CxhrFJQjP4LWkkbb2TUCaW5pggLA1iNn0wOnZj9elWvBjtOi1P0wIpGK4eBOz8YJ6fXsSzvrIf8IpMYjr9RyJBnjZTW9T5T8BSqWYqdh9hGRYhOhoFc3OOdRA7f8OZ8ARzXtkFSqPllilw3EpK4fzC60xU/g+qa16sfb70dEUwy7VveaOuWuaomd91zLluW56dBkCcaMPVqAXtERPs7z7Cwc9Y1xmKvWKH2V2/qWDzNma6XIjHx9yGVouZz4jUzYiTlniqDNw27/DDKPhiX1xUpSqW6ai1SkCwUaH6KxU9C4cAoyPkcCXAAHi8tIdAcNQChGyqik8kxMtPzuZBqXM7tKrAM03eVlDLNUT4e5k=) 2025-03-26 15:31:42.912806 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPXmjt4ggEBsl2DuLYtga51KRctnlsq96Nn99djKk1lOVIW3ZcKXpR1nmpMsSzaKP9jE+0/40XqFdpWn08v+g5w=) 2025-03-26 15:31:42.916081 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDGv08souU/Pq4Veu+LSoVHah8GKH172ktnf4T8lXi+e) 2025-03-26 15:31:42.917159 | orchestrator | 2025-03-26 15:31:42.918187 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:31:42.919534 | orchestrator | Wednesday 26 March 2025 15:31:42 +0000 (0:00:01.279) 0:00:07.764 ******* 2025-03-26 15:31:44.155443 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrNe3f2Txpw3BZl/Re5AT0AoyTJ2pUeDl8fjzxm6R9dZrrQYCMrIAvU3QHX3UjJ9nu98Q3C3mzOBUTfkwXkqGEtuue8+rd5h/awhzYgSmlDF0MNYdYJYDhQohx1oiSBeVprOY0ngxR9FaILi5EtLk4ULrjA7rM8sRf4rwXGiF0X/hsoYujhaQuiAqDZ3wOSm9fwuAVn54GbH7N3ZlRwFOH3RnxfceExyQ2TUwISQAvIh3E+DcdDNITZyQZYMr2h5NS0e/0FYnKoG6tZJp+vqqhgy5mwJTd6BfV5Olpk+Wp1j9iF4aMjXQ47AmyOl6l6FjdyDjgYwYdZill63WF7Pv5reqeSTVcYniMb6fbE8kLgb0tfZLhlg9Vq5AEmh9FAes86HSlbmE3E+WJgGAbUd6zfF7NLoZjpZ50NeFiaMDKe4i+YX0D51nf1j/wTV1esvHmxUBFh4rz8uDYLKOsuPqdVV9kNfFM6Ui/hcITR4uZuz36RmfOtbN1monEmcGYjSE=) 2025-03-26 15:31:44.156071 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH+ezK2YFYOkoTlAe9AWoNWE5rvqfJOLxm9HhVFuNCKlW/SwGiJ4xyhQHqOjsaQZuRitI2oHWJwRKJgv9nAFWAc=) 2025-03-26 15:31:44.156294 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDDLlxDoiuIrcNrgXDAiqXsdW896czV7256eUjHb2lse) 2025-03-26 15:31:44.156805 | orchestrator | 2025-03-26 15:31:44.157719 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:31:44.157907 | orchestrator | Wednesday 26 March 2025 15:31:44 +0000 (0:00:01.243) 0:00:09.007 ******* 2025-03-26 15:31:45.305036 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtDLTgFs2gwoMxZzGxaXNubVh6lHfN0tlDbMfrXIENJWdoT7I/49oHSRUW+aWUaqyQw7MBEwFEqQYUmSdeEBKiQmCJNgcQZ4MBMgM9cKWNJuu3r9ach6TOhRnSNuX3IgGCHuARJySxqzHAU2fBV9b440FA7o9/sHpDsR/1jjRGyZBK9UmvLgISCZHyLtqvi3NC1y00lIJoktxItEh2r/kOtyl1R5/daN4MkFYlDUUc/Q6+dAiyymL7PIBNCjri/Nuy2Ha4WmvMAOCGMCB5DSPbsKbI3nVt7uTuZQRRM9FAQ94wXlOyL7bqBzOWuhi1k9FHBDeHon5U1NQmS4ETazydPPi0c3d7u6yWLuYM3EFVpKXLVs+jDy4jZYBBNJ7ug27IRA72P9ASl9NxBcF4JKEsR3Dr1mE6u03jEO6xvFnrbVtNh4rBRHP/CarZiLz70gLLp1oI+dJZxVNG3Fxv2EmAN9ie/nnTMOWFf0mWWWZoRmsbzl9L/BZf3vXlkR8uZRk=) 2025-03-26 15:31:45.305575 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDZsP+dLWlKLIlYc/voDGgJf2WZwQWqdAEaFsQCe+BxRqkla8eq78LbvX+t3UCNFMKcQT7RddNzcrxNrbRKuLiI=) 2025-03-26 15:31:45.306466 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJhzHJVIKt26mS4KJ46PG7upcQGY0kBdK3pPMO6OqGdw) 2025-03-26 15:31:45.307195 | orchestrator | 2025-03-26 15:31:45.308161 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:31:45.308689 | orchestrator | Wednesday 26 March 2025 15:31:45 +0000 (0:00:01.149) 0:00:10.157 ******* 2025-03-26 15:31:46.514879 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZw+7Zy7Xsw7lkMoLwktZo2Q+Wxv0vnVcINodHPdykR024drMC4OKGef+hfdvEHuQAXInPAjv/MiE2vDhYU7mnzprPWujqZs8bbMwrUmq5mLVi9JRyJnuqAWy8oHLbL6Lgc92s0gpTdApPuPtsg3aeRj6QgJ9OUAxSBpYhI/c+DCHuWHPtKVb1a7rGlf8Oq3+erup9IcHyOUOKJURFmhW6p13C04ydkEB6QbCZRzwpxinZRTmo729r+qHrG8R7lXBJW4hxxcC7piMfvTL2nmqABK4Ovxf2yRRw/jzqUgHmyVAKZcuxGSSm1x4vbHFIUJOxhP5sDSSb4PeYSIKgDq5ZCPob5zDdMH0cPllNgzP0OwbMuSjCnkDTWj0UpNavmM7g1/chU8lyrSG9lb6frDWv/wOkcedeA3dYOidMtwwrCUyCFUR68EqI/gx/XgfyH1vF85W2UQv/PMh4CaKNL7M4u/opPwzv+KmSWRHjZ4iDv0eV380ciqgaNqtShUWoyHE=) 2025-03-26 15:31:46.515558 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHsGDQU/uJtc7m2QGIVR5nmS5HtPq75ErnGZNcLAFW2Otl1Rhww0cUyJH+PjacUt1KLKvP4hU6NeBN+yn3GFuIU=) 2025-03-26 15:31:46.517448 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDViIO8Bf7uTBnNjNkX8ASz7fJuMlGKg2cNp/nyzmPJE) 2025-03-26 15:31:46.518449 | orchestrator | 2025-03-26 15:31:46.519236 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:31:46.520824 | orchestrator | Wednesday 26 March 2025 15:31:46 +0000 (0:00:01.208) 0:00:11.366 ******* 2025-03-26 15:31:47.741245 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDPlqfoa2Y2S3wRpgUnRqRL8wooEtZM+igOxpOIFr0ctw6u5FWSY1vglf9gu++o/3MzggH2JmbsgSynTIWFM5vM8tvhm/fD+c0uioJ3pUs95z7RRZNA0DWPGVl6WovosuhiZXvm4qQf5rDw+W9t/+/KMwJ9jc/Y9YAGq2whCJiIbM3h7AhNkX6jzYQCH3xAxiZ6AkFUPzq+VUDnW0WkbTAuQSLOxStpVPaUcu7cDuX8jpK4nP3HBXfP9Ifsfceeno1V1+P9QXw4HW1ejaLxos6X81tXH8ywq4r7bJylzmz1hSgwt3AXc0wHUDPFqi7cxJQ9oPtGtEoXKgw0YGW7Y8sAEXFSiyZkevV2Mrc+tOTJJdapkP+p2doHJ7J8mr4po6+BfL7yuBNDv1Mhj33UslGy707Ki19UQ+QziCmJ/MWhpN8ccQnGsb6jFdf9K+omIatB3qW5/dtMdNQhOjtb1ZvGrIOXzK9Nd0/cYxkEpUP4LeNd1Zf9Gyyy5u9YpjL70Z8=) 2025-03-26 15:31:47.742204 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBaHDK14Y6mZDy7FGFePO2/v0IxzVE2IZOXZvG9d0q0RR1SiOauoTqSk1brwRCX0xhOE7e5erXwSet6pSJFRNLE=) 2025-03-26 15:31:47.742252 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVlNrahd5diPwFCh/HBC2C+08JU6rnSgb7GTUaECSz8) 2025-03-26 15:31:47.742413 | orchestrator | 2025-03-26 15:31:47.742440 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:31:47.742692 | orchestrator | Wednesday 26 March 2025 15:31:47 +0000 (0:00:01.227) 0:00:12.593 ******* 2025-03-26 15:31:48.971795 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZieJWE6Z4vVDGKDKbcJlz5dE1waqwkLxuLoocthzedyerFWr/4STA8nwzzQLJBVUsmYyaIRt6WYeNa0do1Cze5byY14/kVICGp9mjkFZrKuSB646pkmLQutij9YOtvE2zRLweAD3SWZIZw6iwveTGCYloQKiD/v5j3o4Led6cktZssZAlyuQGVrmCoj3pjEvE4baAMUCOLns+YguHEEQpOFd0LJmXSTzhpR6/uy/PJnmnsusqzHe04YvFkZsfWxnR30RWZqt/6yTXrhCoYWzWAcoYbbJ3OtdH8T5vH7FWVGTVbrIY+sJabChICqroQaqQkFfx7hBdnpsYFF+BPl2jvwtsl5lZcmTfASmkVGQUokLWNMYQPMcZ8QXSh7GQZoz20c3h3lfMUDx05BJdfeBpqH8iGJXA6sHLFcJziV6LA4JW9DO76rO6FZ3X1Uz4qH8BBQ0arenYqT9saz6kD3F/YCs2BsruvKCSG/dYy9Rbwf+uaHO8rOhjWnjkwxmS128=) 2025-03-26 15:31:48.972086 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOMji+5ClqYCGbUNVYln3460imLy98vViZS8mUQPKra/NlIwQZTx+8PDyJYjurKN+zOLbXAzYLbq1PWIkvY9Gd4=) 2025-03-26 15:31:48.973224 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE4Ed248HY1f40J5dlqeZBHZGjVPw8RoITDiR6tqAp2d) 2025-03-26 15:31:48.974073 | orchestrator | 2025-03-26 15:31:48.974908 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:31:48.975921 | orchestrator | Wednesday 26 March 2025 15:31:48 +0000 (0:00:01.229) 0:00:13.823 ******* 2025-03-26 15:31:50.160549 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD0X9CpA0kAtzD9y/F+S9Cy0XKi8a50bd/S9pyChOgNFHmDPIoX28ZhmVIHFMrqGVT67W+oGviPDFQhl9NEqBwAl4QgV8hgiReG5cJBjutz3JRMJLw8L8dFYZ2mMDYMkbY9hW/BtM31geVeTnTgSoDdnB8/EXSChrvyp8JKRh5Pr8hjzJ5ITL+f2lHhDP4Yi07J6EIw8HhFtPodLh364tGBh0TP431aXM+w35umPYCBSHIhO/AA6lT1TUGGjDBygSjQItaWYPaExpWhV86E6TuqEvtzShZCqALfysJgOHbi6cIdf7L6UgD3zlddj1+ow+70dhZO4RTOgXx4cpF0491d4jHQVDGGOBVmFOMi2dtndZnbPBWdmoIq1T/JPVJKj4sRG3rtgrkDguA7YQ7To9HEOo6g0HFC9i9n4/TRm3WihCftFVJKDjOOiv0ut43fs2hPCTBp2nr5DATjEoTZERlLWOkFIKyAvhd1nGANuHjgInW2hcbgvip/0Nmpg4m0ldU=) 2025-03-26 15:31:50.160815 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBevMFhVgyy4PeyFpgy3XLqHZeTIcqH+Iz/ZdI7b70xY) 2025-03-26 15:31:50.161381 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEHZRixWeqjoifez2o4LFaUc2ZMgTOlMV51XxHwGldXq50pFS1sOchUfCiexal4eqVT/RUUbcmjK7TRtGhgvx2s=) 2025-03-26 15:31:50.165442 | orchestrator | 2025-03-26 15:31:50.166045 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-03-26 15:31:50.166382 | orchestrator | Wednesday 26 March 2025 15:31:50 +0000 (0:00:01.189) 0:00:15.012 ******* 2025-03-26 15:31:55.714279 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-03-26 15:31:55.714742 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-03-26 15:31:55.715544 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-03-26 15:31:55.716334 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-03-26 15:31:55.717004 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-03-26 15:31:55.717036 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-03-26 15:31:55.717559 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-03-26 15:31:55.717948 | orchestrator | 2025-03-26 15:31:55.718268 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-03-26 15:31:55.718556 | orchestrator | Wednesday 26 March 2025 15:31:55 +0000 (0:00:05.554) 0:00:20.567 ******* 2025-03-26 15:31:55.911218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-03-26 15:31:55.911413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-03-26 15:31:55.911759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-03-26 15:31:55.911794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-03-26 15:31:55.914918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-03-26 15:31:55.915667 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-03-26 15:31:55.915916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-03-26 15:31:55.915951 | orchestrator | 2025-03-26 15:31:57.087282 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:31:57.087413 | orchestrator | Wednesday 26 March 2025 15:31:55 +0000 (0:00:00.197) 0:00:20.765 ******* 2025-03-26 15:31:57.087454 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCjPyNXJqQkW6O/QBbDd3/ixHTB54dHpEYQ2U2EZu7Kz2wPVWr9RY4OyXbZ5M/7Qv31oQ7Uk8uT/5JY0+ODoeyv0AE6v8agiuGxXBUArVFRp+hHX7FUFpSvGR8dldl35r8X6Gwc4J0RdfoiPtLk3uWB30CxhrFJQjP4LWkkbb2TUCaW5pggLA1iNn0wOnZj9elWvBjtOi1P0wIpGK4eBOz8YJ6fXsSzvrIf8IpMYjr9RyJBnjZTW9T5T8BSqWYqdh9hGRYhOhoFc3OOdRA7f8OZ8ARzXtkFSqPllilw3EpK4fzC60xU/g+qa16sfb70dEUwy7VveaOuWuaomd91zLluW56dBkCcaMPVqAXtERPs7z7Cwc9Y1xmKvWKH2V2/qWDzNma6XIjHx9yGVouZz4jUzYiTlniqDNw27/DDKPhiX1xUpSqW6ai1SkCwUaH6KxU9C4cAoyPkcCXAAHi8tIdAcNQChGyqik8kxMtPzuZBqXM7tKrAM03eVlDLNUT4e5k=) 2025-03-26 15:31:57.087813 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPXmjt4ggEBsl2DuLYtga51KRctnlsq96Nn99djKk1lOVIW3ZcKXpR1nmpMsSzaKP9jE+0/40XqFdpWn08v+g5w=) 2025-03-26 15:31:57.087895 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDGv08souU/Pq4Veu+LSoVHah8GKH172ktnf4T8lXi+e) 2025-03-26 15:31:57.088498 | orchestrator | 2025-03-26 15:31:57.089035 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:31:57.089755 | orchestrator | Wednesday 26 March 2025 15:31:57 +0000 (0:00:01.174) 0:00:21.939 ******* 2025-03-26 15:31:58.265483 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDDLlxDoiuIrcNrgXDAiqXsdW896czV7256eUjHb2lse) 2025-03-26 15:31:58.267200 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrNe3f2Txpw3BZl/Re5AT0AoyTJ2pUeDl8fjzxm6R9dZrrQYCMrIAvU3QHX3UjJ9nu98Q3C3mzOBUTfkwXkqGEtuue8+rd5h/awhzYgSmlDF0MNYdYJYDhQohx1oiSBeVprOY0ngxR9FaILi5EtLk4ULrjA7rM8sRf4rwXGiF0X/hsoYujhaQuiAqDZ3wOSm9fwuAVn54GbH7N3ZlRwFOH3RnxfceExyQ2TUwISQAvIh3E+DcdDNITZyQZYMr2h5NS0e/0FYnKoG6tZJp+vqqhgy5mwJTd6BfV5Olpk+Wp1j9iF4aMjXQ47AmyOl6l6FjdyDjgYwYdZill63WF7Pv5reqeSTVcYniMb6fbE8kLgb0tfZLhlg9Vq5AEmh9FAes86HSlbmE3E+WJgGAbUd6zfF7NLoZjpZ50NeFiaMDKe4i+YX0D51nf1j/wTV1esvHmxUBFh4rz8uDYLKOsuPqdVV9kNfFM6Ui/hcITR4uZuz36RmfOtbN1monEmcGYjSE=) 2025-03-26 15:31:58.268465 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH+ezK2YFYOkoTlAe9AWoNWE5rvqfJOLxm9HhVFuNCKlW/SwGiJ4xyhQHqOjsaQZuRitI2oHWJwRKJgv9nAFWAc=) 2025-03-26 15:31:58.269443 | orchestrator | 2025-03-26 15:31:58.270217 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:31:58.271426 | orchestrator | Wednesday 26 March 2025 15:31:58 +0000 (0:00:01.178) 0:00:23.118 ******* 2025-03-26 15:31:59.473499 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtDLTgFs2gwoMxZzGxaXNubVh6lHfN0tlDbMfrXIENJWdoT7I/49oHSRUW+aWUaqyQw7MBEwFEqQYUmSdeEBKiQmCJNgcQZ4MBMgM9cKWNJuu3r9ach6TOhRnSNuX3IgGCHuARJySxqzHAU2fBV9b440FA7o9/sHpDsR/1jjRGyZBK9UmvLgISCZHyLtqvi3NC1y00lIJoktxItEh2r/kOtyl1R5/daN4MkFYlDUUc/Q6+dAiyymL7PIBNCjri/Nuy2Ha4WmvMAOCGMCB5DSPbsKbI3nVt7uTuZQRRM9FAQ94wXlOyL7bqBzOWuhi1k9FHBDeHon5U1NQmS4ETazydPPi0c3d7u6yWLuYM3EFVpKXLVs+jDy4jZYBBNJ7ug27IRA72P9ASl9NxBcF4JKEsR3Dr1mE6u03jEO6xvFnrbVtNh4rBRHP/CarZiLz70gLLp1oI+dJZxVNG3Fxv2EmAN9ie/nnTMOWFf0mWWWZoRmsbzl9L/BZf3vXlkR8uZRk=) 2025-03-26 15:31:59.474382 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDZsP+dLWlKLIlYc/voDGgJf2WZwQWqdAEaFsQCe+BxRqkla8eq78LbvX+t3UCNFMKcQT7RddNzcrxNrbRKuLiI=) 2025-03-26 
15:31:59.474428 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJhzHJVIKt26mS4KJ46PG7upcQGY0kBdK3pPMO6OqGdw) 2025-03-26 15:31:59.474454 | orchestrator | 2025-03-26 15:31:59.476523 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:32:00.717766 | orchestrator | Wednesday 26 March 2025 15:31:59 +0000 (0:00:01.208) 0:00:24.326 ******* 2025-03-26 15:32:00.717967 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDViIO8Bf7uTBnNjNkX8ASz7fJuMlGKg2cNp/nyzmPJE) 2025-03-26 15:32:00.719168 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZw+7Zy7Xsw7lkMoLwktZo2Q+Wxv0vnVcINodHPdykR024drMC4OKGef+hfdvEHuQAXInPAjv/MiE2vDhYU7mnzprPWujqZs8bbMwrUmq5mLVi9JRyJnuqAWy8oHLbL6Lgc92s0gpTdApPuPtsg3aeRj6QgJ9OUAxSBpYhI/c+DCHuWHPtKVb1a7rGlf8Oq3+erup9IcHyOUOKJURFmhW6p13C04ydkEB6QbCZRzwpxinZRTmo729r+qHrG8R7lXBJW4hxxcC7piMfvTL2nmqABK4Ovxf2yRRw/jzqUgHmyVAKZcuxGSSm1x4vbHFIUJOxhP5sDSSb4PeYSIKgDq5ZCPob5zDdMH0cPllNgzP0OwbMuSjCnkDTWj0UpNavmM7g1/chU8lyrSG9lb6frDWv/wOkcedeA3dYOidMtwwrCUyCFUR68EqI/gx/XgfyH1vF85W2UQv/PMh4CaKNL7M4u/opPwzv+KmSWRHjZ4iDv0eV380ciqgaNqtShUWoyHE=) 2025-03-26 15:32:00.719688 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHsGDQU/uJtc7m2QGIVR5nmS5HtPq75ErnGZNcLAFW2Otl1Rhww0cUyJH+PjacUt1KLKvP4hU6NeBN+yn3GFuIU=) 2025-03-26 15:32:00.720216 | orchestrator | 2025-03-26 15:32:00.720984 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:32:00.721307 | orchestrator | Wednesday 26 March 2025 15:32:00 +0000 (0:00:01.243) 0:00:25.570 ******* 2025-03-26 15:32:01.887322 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBaHDK14Y6mZDy7FGFePO2/v0IxzVE2IZOXZvG9d0q0RR1SiOauoTqSk1brwRCX0xhOE7e5erXwSet6pSJFRNLE=) 2025-03-26 15:32:01.887713 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPlqfoa2Y2S3wRpgUnRqRL8wooEtZM+igOxpOIFr0ctw6u5FWSY1vglf9gu++o/3MzggH2JmbsgSynTIWFM5vM8tvhm/fD+c0uioJ3pUs95z7RRZNA0DWPGVl6WovosuhiZXvm4qQf5rDw+W9t/+/KMwJ9jc/Y9YAGq2whCJiIbM3h7AhNkX6jzYQCH3xAxiZ6AkFUPzq+VUDnW0WkbTAuQSLOxStpVPaUcu7cDuX8jpK4nP3HBXfP9Ifsfceeno1V1+P9QXw4HW1ejaLxos6X81tXH8ywq4r7bJylzmz1hSgwt3AXc0wHUDPFqi7cxJQ9oPtGtEoXKgw0YGW7Y8sAEXFSiyZkevV2Mrc+tOTJJdapkP+p2doHJ7J8mr4po6+BfL7yuBNDv1Mhj33UslGy707Ki19UQ+QziCmJ/MWhpN8ccQnGsb6jFdf9K+omIatB3qW5/dtMdNQhOjtb1ZvGrIOXzK9Nd0/cYxkEpUP4LeNd1Zf9Gyyy5u9YpjL70Z8=) 2025-03-26 15:32:01.888459 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVlNrahd5diPwFCh/HBC2C+08JU6rnSgb7GTUaECSz8) 2025-03-26 15:32:01.888980 | orchestrator | 2025-03-26 15:32:01.889685 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:32:01.889976 | orchestrator | Wednesday 26 March 2025 15:32:01 +0000 (0:00:01.169) 0:00:26.739 ******* 2025-03-26 15:32:03.076732 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE4Ed248HY1f40J5dlqeZBHZGjVPw8RoITDiR6tqAp2d) 2025-03-26 15:32:03.077430 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZieJWE6Z4vVDGKDKbcJlz5dE1waqwkLxuLoocthzedyerFWr/4STA8nwzzQLJBVUsmYyaIRt6WYeNa0do1Cze5byY14/kVICGp9mjkFZrKuSB646pkmLQutij9YOtvE2zRLweAD3SWZIZw6iwveTGCYloQKiD/v5j3o4Led6cktZssZAlyuQGVrmCoj3pjEvE4baAMUCOLns+YguHEEQpOFd0LJmXSTzhpR6/uy/PJnmnsusqzHe04YvFkZsfWxnR30RWZqt/6yTXrhCoYWzWAcoYbbJ3OtdH8T5vH7FWVGTVbrIY+sJabChICqroQaqQkFfx7hBdnpsYFF+BPl2jvwtsl5lZcmTfASmkVGQUokLWNMYQPMcZ8QXSh7GQZoz20c3h3lfMUDx05BJdfeBpqH8iGJXA6sHLFcJziV6LA4JW9DO76rO6FZ3X1Uz4qH8BBQ0arenYqT9saz6kD3F/YCs2BsruvKCSG/dYy9Rbwf+uaHO8rOhjWnjkwxmS128=) 2025-03-26 15:32:03.078296 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOMji+5ClqYCGbUNVYln3460imLy98vViZS8mUQPKra/NlIwQZTx+8PDyJYjurKN+zOLbXAzYLbq1PWIkvY9Gd4=) 2025-03-26 15:32:03.078506 | orchestrator | 2025-03-26 15:32:03.079358 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-03-26 15:32:03.080939 | orchestrator | Wednesday 26 March 2025 15:32:03 +0000 (0:00:01.189) 0:00:27.928 ******* 2025-03-26 15:32:04.307115 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD0X9CpA0kAtzD9y/F+S9Cy0XKi8a50bd/S9pyChOgNFHmDPIoX28ZhmVIHFMrqGVT67W+oGviPDFQhl9NEqBwAl4QgV8hgiReG5cJBjutz3JRMJLw8L8dFYZ2mMDYMkbY9hW/BtM31geVeTnTgSoDdnB8/EXSChrvyp8JKRh5Pr8hjzJ5ITL+f2lHhDP4Yi07J6EIw8HhFtPodLh364tGBh0TP431aXM+w35umPYCBSHIhO/AA6lT1TUGGjDBygSjQItaWYPaExpWhV86E6TuqEvtzShZCqALfysJgOHbi6cIdf7L6UgD3zlddj1+ow+70dhZO4RTOgXx4cpF0491d4jHQVDGGOBVmFOMi2dtndZnbPBWdmoIq1T/JPVJKj4sRG3rtgrkDguA7YQ7To9HEOo6g0HFC9i9n4/TRm3WihCftFVJKDjOOiv0ut43fs2hPCTBp2nr5DATjEoTZERlLWOkFIKyAvhd1nGANuHjgInW2hcbgvip/0Nmpg4m0ldU=) 2025-03-26 15:32:04.307905 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEHZRixWeqjoifez2o4LFaUc2ZMgTOlMV51XxHwGldXq50pFS1sOchUfCiexal4eqVT/RUUbcmjK7TRtGhgvx2s=) 2025-03-26 15:32:04.307953 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBevMFhVgyy4PeyFpgy3XLqHZeTIcqH+Iz/ZdI7b70xY) 2025-03-26 15:32:04.308990 | orchestrator | 2025-03-26 15:32:04.309750 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-03-26 15:32:04.310607 | orchestrator | Wednesday 26 March 2025 15:32:04 +0000 (0:00:01.229) 0:00:29.158 ******* 2025-03-26 15:32:04.516569 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-03-26 15:32:04.517702 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-03-26 15:32:04.519036 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-03-26 15:32:04.519724 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-03-26 15:32:04.521729 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-03-26 15:32:04.523081 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-03-26 15:32:04.524392 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-03-26 15:32:04.525228 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:32:04.525977 | orchestrator | 2025-03-26 15:32:04.527068 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-03-26 15:32:04.528171 | orchestrator | Wednesday 26 March 2025 15:32:04 +0000 (0:00:00.211) 0:00:29.369 ******* 2025-03-26 15:32:04.608798 | orchestrator | skipping: 
[testbed-manager] 2025-03-26 15:32:04.609432 | orchestrator | 2025-03-26 15:32:04.610253 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-03-26 15:32:04.612255 | orchestrator | Wednesday 26 March 2025 15:32:04 +0000 (0:00:00.093) 0:00:29.462 ******* 2025-03-26 15:32:04.686775 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:32:04.688091 | orchestrator | 2025-03-26 15:32:04.689390 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-03-26 15:32:04.690429 | orchestrator | Wednesday 26 March 2025 15:32:04 +0000 (0:00:00.076) 0:00:29.539 ******* 2025-03-26 15:32:05.544979 | orchestrator | changed: [testbed-manager] 2025-03-26 15:32:05.545689 | orchestrator | 2025-03-26 15:32:05.546412 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:32:05.547127 | orchestrator | 2025-03-26 15:32:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:32:05.547569 | orchestrator | 2025-03-26 15:32:05 | INFO  | Please wait and do not abort execution. 2025-03-26 15:32:05.548779 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-26 15:32:05.550183 | orchestrator | 2025-03-26 15:32:05.551127 | orchestrator | Wednesday 26 March 2025 15:32:05 +0000 (0:00:00.857) 0:00:30.397 ******* 2025-03-26 15:32:05.552171 | orchestrator | =============================================================================== 2025-03-26 15:32:05.554391 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.17s 2025-03-26 15:32:05.555351 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.55s 2025-03-26 15:32:05.556343 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.28s 2025-03-26 15:32:05.557281 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2025-03-26 15:32:05.558208 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2025-03-26 15:32:05.559043 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-03-26 15:32:05.559345 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-03-26 15:32:05.560234 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-03-26 15:32:05.560450 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-03-26 15:32:05.561290 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2025-03-26 15:32:05.561424 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-03-26 15:32:05.562613 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-03-26 15:32:05.562873 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2025-03-26 15:32:05.563463 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-03-26 15:32:05.564236 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-03-26 15:32:05.564474 | orchestrator | osism.commons.known_hosts : Write scanned 
known_hosts entries ----------- 1.15s 2025-03-26 15:32:05.565394 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.86s 2025-03-26 15:32:05.566101 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.21s 2025-03-26 15:32:05.567295 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2025-03-26 15:32:05.568156 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-03-26 15:32:06.040863 | orchestrator | + osism apply squid 2025-03-26 15:32:07.675061 | orchestrator | 2025-03-26 15:32:07 | INFO  | Task e4680ce7-66e7-45ed-9bf7-62ec0d7ec3e5 (squid) was prepared for execution. 2025-03-26 15:32:11.052189 | orchestrator | 2025-03-26 15:32:07 | INFO  | It takes a moment until task e4680ce7-66e7-45ed-9bf7-62ec0d7ec3e5 (squid) has been started and output is visible here. 2025-03-26 15:32:11.052346 | orchestrator | 2025-03-26 15:32:11.053238 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-03-26 15:32:11.053394 | orchestrator | 2025-03-26 15:32:11.055055 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-03-26 15:32:11.055094 | orchestrator | Wednesday 26 March 2025 15:32:11 +0000 (0:00:00.111) 0:00:00.111 ******* 2025-03-26 15:32:11.155536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-03-26 15:32:11.155777 | orchestrator | 2025-03-26 15:32:12.784423 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-03-26 15:32:12.784542 | orchestrator | Wednesday 26 March 2025 15:32:11 +0000 (0:00:00.107) 0:00:00.219 ******* 2025-03-26 15:32:12.784574 | orchestrator | ok: [testbed-manager] 2025-03-26 15:32:12.785033 | orchestrator | 2025-03-26 15:32:12.785160 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-03-26 15:32:12.785678 | orchestrator | Wednesday 26 March 2025 15:32:12 +0000 (0:00:01.628) 0:00:01.847 ******* 2025-03-26 15:32:14.098727 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-03-26 15:32:14.100164 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-03-26 15:32:14.100205 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-03-26 15:32:14.100838 | orchestrator | 2025-03-26 15:32:14.103010 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-03-26 15:32:14.104140 | orchestrator | Wednesday 26 March 2025 15:32:14 +0000 (0:00:01.313) 0:00:03.161 ******* 2025-03-26 15:32:15.325761 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-03-26 15:32:15.326269 | orchestrator | 2025-03-26 15:32:15.327051 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-03-26 15:32:15.327935 | orchestrator | Wednesday 26 March 2025 15:32:15 +0000 (0:00:01.227) 0:00:04.388 ******* 2025-03-26 15:32:15.737691 | orchestrator | ok: [testbed-manager] 2025-03-26 15:32:15.739796 | orchestrator | 2025-03-26 15:32:15.740963 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-03-26 15:32:15.741406 | orchestrator | Wednesday 26 March 2025 15:32:15 +0000 
(0:00:00.414) 0:00:04.803 ******* 2025-03-26 15:32:16.809341 | orchestrator | changed: [testbed-manager] 2025-03-26 15:32:16.810101 | orchestrator | 2025-03-26 15:32:16.810925 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-03-26 15:32:16.811673 | orchestrator | Wednesday 26 March 2025 15:32:16 +0000 (0:00:01.069) 0:00:05.873 ******* 2025-03-26 15:32:45.082848 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-03-26 15:32:45.083420 | orchestrator | ok: [testbed-manager] 2025-03-26 15:32:45.083455 | orchestrator | 2025-03-26 15:32:45.083470 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-03-26 15:32:45.083492 | orchestrator | Wednesday 26 March 2025 15:32:45 +0000 (0:00:28.269) 0:00:34.142 ******* 2025-03-26 15:32:57.434615 | orchestrator | changed: [testbed-manager] 2025-03-26 15:32:57.434810 | orchestrator | 2025-03-26 15:32:57.435778 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-03-26 15:32:57.435797 | orchestrator | Wednesday 26 March 2025 15:32:57 +0000 (0:00:12.353) 0:00:46.496 ******* 2025-03-26 15:33:57.523040 | orchestrator | Pausing for 60 seconds 2025-03-26 15:33:57.597806 | orchestrator | changed: [testbed-manager] 2025-03-26 15:33:57.597857 | orchestrator | 2025-03-26 15:33:57.597874 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-03-26 15:33:57.597890 | orchestrator | Wednesday 26 March 2025 15:33:57 +0000 (0:01:00.088) 0:01:46.584 ******* 2025-03-26 15:33:57.597915 | orchestrator | ok: [testbed-manager] 2025-03-26 15:33:57.598356 | orchestrator | 2025-03-26 15:33:57.598800 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-03-26 15:33:57.599636 | orchestrator | Wednesday 26 March 2025 15:33:57 +0000 (0:00:00.078) 0:01:46.663 ******* 2025-03-26 15:33:58.323506 | orchestrator | changed: [testbed-manager] 2025-03-26 15:33:58.324428 | orchestrator | 2025-03-26 15:33:58.326493 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:33:58.327373 | orchestrator | 2025-03-26 15:33:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:33:58.327402 | orchestrator | 2025-03-26 15:33:58 | INFO  | Please wait and do not abort execution. 
2025-03-26 15:33:58.327424 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:33:58.328698 | orchestrator | 2025-03-26 15:33:58.329806 | orchestrator | Wednesday 26 March 2025 15:33:58 +0000 (0:00:00.724) 0:01:47.387 ******* 2025-03-26 15:33:58.330907 | orchestrator | =============================================================================== 2025-03-26 15:33:58.331504 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-03-26 15:33:58.332248 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 28.27s 2025-03-26 15:33:58.333098 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.35s 2025-03-26 15:33:58.333830 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.63s 2025-03-26 15:33:58.334397 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.31s 2025-03-26 15:33:58.335224 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.23s 2025-03-26 15:33:58.335680 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.07s 2025-03-26 15:33:58.336218 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.72s 2025-03-26 15:33:58.336830 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.41s 2025-03-26 15:33:58.337280 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.11s 2025-03-26 15:33:58.337769 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-03-26 15:33:58.838981 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-26 15:33:58.850705 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-03-26 15:33:58.850775 | orchestrator | ++ semver 8.1.0 9.0.0 2025-03-26 15:33:58.903904 | orchestrator | + [[ -1 -lt 0 ]] 2025-03-26 15:33:58.910384 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-03-26 15:33:58.910412 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-03-26 15:33:58.910434 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-03-26 15:33:58.914119 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-03-26 15:33:58.918218 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-03-26 15:34:00.589092 | orchestrator | 2025-03-26 15:34:00 | INFO  | Task 87b05269-c09d-45b0-8d33-f277012b8cdb (operator) was prepared for execution. 2025-03-26 15:34:03.810603 | orchestrator | 2025-03-26 15:34:00 | INFO  | It takes a moment until task 87b05269-c09d-45b0-8d33-f277012b8cdb (operator) has been started and output is visible here. 
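The sed calls traced above adapt the generated configuration to the pinned OSISM release 8.1.0 (anything other than "latest"): the Kolla images are pulled from the kolla/release namespace instead of kolla, and the commented-out network_dispatcher_scripts entries for /opt/configuration/network/vxlan.sh are re-enabled in the testbed-nodes and testbed-managers group_vars. A hedged way to confirm the edits took effect (file paths taken from the trace; the expected lines are inferred from the sed patterns, with YAML indentation assumed):

    grep 'docker_namespace' /opt/configuration/inventory/group_vars/all/kolla.yml
    # expected: docker_namespace: kolla/release
    grep -A2 'network_dispatcher_scripts' /opt/configuration/inventory/group_vars/testbed-nodes.yml
    # expected: network_dispatcher_scripts:
    #             - src: /opt/configuration/network/vxlan.sh
    #               dest: routable.d/vxlan.sh
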
2025-03-26 15:34:03.810806 | orchestrator | 2025-03-26 15:34:03.814488 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-03-26 15:34:03.814609 | orchestrator | 2025-03-26 15:34:03.814631 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-03-26 15:34:03.814663 | orchestrator | Wednesday 26 March 2025 15:34:03 +0000 (0:00:00.095) 0:00:00.095 ******* 2025-03-26 15:34:07.480820 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:34:07.481886 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:34:07.484729 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:34:07.485116 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:34:07.490096 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:34:07.491231 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:34:07.494093 | orchestrator | 2025-03-26 15:34:07.494861 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-03-26 15:34:07.496894 | orchestrator | Wednesday 26 March 2025 15:34:07 +0000 (0:00:03.673) 0:00:03.768 ******* 2025-03-26 15:34:08.448915 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:34:08.452220 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:34:08.452258 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:34:08.453408 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:34:08.453854 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:34:08.454943 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:34:08.455854 | orchestrator | 2025-03-26 15:34:08.456156 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-03-26 15:34:08.456963 | orchestrator | 2025-03-26 15:34:08.457708 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-03-26 15:34:08.458255 | orchestrator | Wednesday 26 March 2025 15:34:08 +0000 (0:00:00.967) 0:00:04.736 ******* 2025-03-26 15:34:08.521615 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:34:08.555804 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:34:08.576900 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:34:08.650109 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:34:08.650983 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:34:08.654493 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:34:08.654932 | orchestrator | 2025-03-26 15:34:08.655480 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-03-26 15:34:08.656224 | orchestrator | Wednesday 26 March 2025 15:34:08 +0000 (0:00:00.201) 0:00:04.937 ******* 2025-03-26 15:34:08.717616 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:34:08.768366 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:34:08.826187 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:34:08.827293 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:34:08.829148 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:34:08.829463 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:34:08.830377 | orchestrator | 2025-03-26 15:34:08.830900 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-03-26 15:34:08.833792 | orchestrator | Wednesday 26 March 2025 15:34:08 +0000 (0:00:00.178) 0:00:05.115 ******* 2025-03-26 15:34:09.604254 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:34:09.604913 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:34:09.605686 | orchestrator | changed: [testbed-node-0] 2025-03-26 
15:34:09.607347 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:34:09.608229 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:34:09.608873 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:34:09.610101 | orchestrator | 2025-03-26 15:34:09.610419 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-03-26 15:34:09.611642 | orchestrator | Wednesday 26 March 2025 15:34:09 +0000 (0:00:00.774) 0:00:05.890 ******* 2025-03-26 15:34:10.462008 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:34:10.462445 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:34:10.463294 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:34:10.466191 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:34:10.466985 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:34:10.467011 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:34:10.467027 | orchestrator | 2025-03-26 15:34:10.467049 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-03-26 15:34:10.470386 | orchestrator | Wednesday 26 March 2025 15:34:10 +0000 (0:00:00.859) 0:00:06.750 ******* 2025-03-26 15:34:11.749946 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-03-26 15:34:11.750609 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-03-26 15:34:11.751216 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-03-26 15:34:11.752426 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-03-26 15:34:11.752913 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-03-26 15:34:11.753504 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-03-26 15:34:11.753830 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-03-26 15:34:11.755417 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-03-26 15:34:11.758350 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-03-26 15:34:11.758946 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-03-26 15:34:11.759485 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-03-26 15:34:11.759808 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-03-26 15:34:11.760288 | orchestrator | 2025-03-26 15:34:11.760609 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-03-26 15:34:11.761005 | orchestrator | Wednesday 26 March 2025 15:34:11 +0000 (0:00:01.282) 0:00:08.033 ******* 2025-03-26 15:34:13.115039 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:34:13.118338 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:34:13.118975 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:34:13.119091 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:34:13.119111 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:34:13.119127 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:34:13.119143 | orchestrator | 2025-03-26 15:34:13.119169 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-03-26 15:34:13.119239 | orchestrator | Wednesday 26 March 2025 15:34:13 +0000 (0:00:01.370) 0:00:09.403 ******* 2025-03-26 15:34:14.373902 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-03-26 15:34:14.375692 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-03-26 15:34:14.376773 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-03-26 15:34:14.646772 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-03-26 15:34:14.647543 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-03-26 15:34:14.648635 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-03-26 15:34:14.651860 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-03-26 15:34:14.653400 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-03-26 15:34:14.654661 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-03-26 15:34:14.655820 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-03-26 15:34:14.657155 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-03-26 15:34:14.657593 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-03-26 15:34:14.658381 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-03-26 15:34:14.659327 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-03-26 15:34:14.660210 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-03-26 15:34:14.660784 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-03-26 15:34:14.662799 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-03-26 15:34:14.663076 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-03-26 15:34:14.663790 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-03-26 15:34:14.665924 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-03-26 15:34:14.666844 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-03-26 15:34:14.667259 | orchestrator | 2025-03-26 15:34:14.667952 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-03-26 15:34:14.668605 | orchestrator | Wednesday 26 March 2025 15:34:14 +0000 (0:00:01.529) 0:00:10.932 ******* 2025-03-26 15:34:15.454412 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:34:15.455697 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:34:15.455763 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:34:15.455785 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:34:15.456969 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:34:15.457246 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:34:15.457652 | orchestrator | 2025-03-26 15:34:15.458160 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-03-26 15:34:15.458792 | orchestrator | Wednesday 26 March 2025 15:34:15 +0000 (0:00:00.810) 0:00:11.742 ******* 2025-03-26 15:34:15.536210 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:34:15.569041 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:34:15.591458 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:34:15.664998 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:34:15.666358 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:34:15.667923 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:34:15.671868 | orchestrator | 2025-03-26 15:34:15.672850 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-03-26 15:34:15.673642 | orchestrator | Wednesday 26 March 2025 15:34:15 +0000 (0:00:00.211) 0:00:11.953 ******* 2025-03-26 15:34:16.422357 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-03-26 15:34:16.424899 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:34:16.425621 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-03-26 15:34:16.426198 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:34:16.426500 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-03-26 15:34:16.427825 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:34:16.428268 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-03-26 15:34:16.428293 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:34:16.428857 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-03-26 15:34:16.429494 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:34:16.429710 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-03-26 15:34:16.430744 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:34:16.430889 | orchestrator | 2025-03-26 15:34:16.430914 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-03-26 15:34:16.431415 | orchestrator | Wednesday 26 March 2025 15:34:16 +0000 (0:00:00.755) 0:00:12.709 ******* 2025-03-26 15:34:16.465617 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:34:16.485912 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:34:16.507174 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:34:16.531348 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:34:16.573822 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:34:16.575058 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:34:16.575990 | orchestrator | 2025-03-26 15:34:16.576748 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-03-26 15:34:16.577643 | orchestrator | Wednesday 26 March 2025 15:34:16 +0000 (0:00:00.149) 0:00:12.859 ******* 2025-03-26 15:34:16.628282 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:34:16.664820 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:34:16.683466 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:34:16.718163 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:34:16.756932 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:34:16.757049 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:34:16.757069 | orchestrator | 2025-03-26 15:34:16.757088 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-03-26 15:34:16.758064 | orchestrator | Wednesday 26 March 2025 15:34:16 +0000 (0:00:00.182) 0:00:13.042 ******* 2025-03-26 15:34:16.800645 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:34:16.846115 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:34:16.868229 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:34:16.909201 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:34:16.911922 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:34:16.913480 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:34:16.914286 | orchestrator | 2025-03-26 15:34:16.915571 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-03-26 15:34:16.916527 | orchestrator | Wednesday 26 March 2025 15:34:16 +0000 (0:00:00.154) 0:00:13.197 ******* 2025-03-26 15:34:17.646186 | orchestrator | changed: [testbed-node-0] 2025-03-26 
15:34:17.647032 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:34:17.647067 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:34:17.647088 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:34:17.648155 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:34:17.648941 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:34:17.649690 | orchestrator | 2025-03-26 15:34:17.650448 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-03-26 15:34:17.651090 | orchestrator | Wednesday 26 March 2025 15:34:17 +0000 (0:00:00.732) 0:00:13.929 ******* 2025-03-26 15:34:17.741087 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:34:17.771266 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:34:17.789747 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:34:17.915223 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:34:17.915986 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:34:17.917388 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:34:17.918261 | orchestrator | 2025-03-26 15:34:17.918815 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:34:17.919223 | orchestrator | 2025-03-26 15:34:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:34:17.920682 | orchestrator | 2025-03-26 15:34:17 | INFO  | Please wait and do not abort execution. 2025-03-26 15:34:17.921325 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-26 15:34:17.922100 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-26 15:34:17.923840 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-26 15:34:17.925163 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-26 15:34:17.926097 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-26 15:34:17.927312 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-03-26 15:34:17.928337 | orchestrator | 2025-03-26 15:34:17.928929 | orchestrator | Wednesday 26 March 2025 15:34:17 +0000 (0:00:00.274) 0:00:14.204 ******* 2025-03-26 15:34:17.929600 | orchestrator | =============================================================================== 2025-03-26 15:34:17.930553 | orchestrator | Gathering Facts --------------------------------------------------------- 3.67s 2025-03-26 15:34:17.930757 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.53s 2025-03-26 15:34:17.931823 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.37s 2025-03-26 15:34:17.932167 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.28s 2025-03-26 15:34:17.932857 | orchestrator | Do not require tty for all users ---------------------------------------- 0.97s 2025-03-26 15:34:17.934109 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.86s 2025-03-26 15:34:17.935103 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.81s 2025-03-26 15:34:17.935846 | orchestrator | osism.commons.operator : Create operator group 
-------------------------- 0.77s 2025-03-26 15:34:17.936417 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.76s 2025-03-26 15:34:17.936942 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.73s 2025-03-26 15:34:17.937381 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s 2025-03-26 15:34:17.937745 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.21s 2025-03-26 15:34:17.939455 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s 2025-03-26 15:34:17.939544 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s 2025-03-26 15:34:17.940062 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2025-03-26 15:34:17.940682 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2025-03-26 15:34:17.941088 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2025-03-26 15:34:18.428680 | orchestrator | + osism apply --environment custom facts 2025-03-26 15:34:19.906633 | orchestrator | 2025-03-26 15:34:19 | INFO  | Trying to run play facts in environment custom 2025-03-26 15:34:19.960530 | orchestrator | 2025-03-26 15:34:19 | INFO  | Task 92efaf8b-8403-4483-96eb-cd5d25852988 (facts) was prepared for execution. 2025-03-26 15:34:23.961811 | orchestrator | 2025-03-26 15:34:19 | INFO  | It takes a moment until task 92efaf8b-8403-4483-96eb-cd5d25852988 (facts) has been started and output is visible here. 2025-03-26 15:34:23.961954 | orchestrator | 2025-03-26 15:34:23.967573 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-03-26 15:34:23.968319 | orchestrator | 2025-03-26 15:34:23.968751 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-03-26 15:34:23.969155 | orchestrator | Wednesday 26 March 2025 15:34:23 +0000 (0:00:00.136) 0:00:00.136 ******* 2025-03-26 15:34:25.185913 | orchestrator | ok: [testbed-manager] 2025-03-26 15:34:26.364577 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:34:26.365824 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:34:26.365910 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:34:26.366753 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:34:26.367031 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:34:26.367493 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:34:26.367778 | orchestrator | 2025-03-26 15:34:26.368053 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-03-26 15:34:26.368797 | orchestrator | Wednesday 26 March 2025 15:34:26 +0000 (0:00:02.408) 0:00:02.544 ******* 2025-03-26 15:34:27.663153 | orchestrator | ok: [testbed-manager] 2025-03-26 15:34:28.607809 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:34:28.608007 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:34:28.608775 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:34:28.609225 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:34:28.610785 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:34:28.611264 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:34:28.611434 | orchestrator | 2025-03-26 15:34:28.612129 | orchestrator | PLAY [Copy custom ceph devices facts] 
****************************************** 2025-03-26 15:34:28.612365 | orchestrator | 2025-03-26 15:34:28.612919 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-03-26 15:34:28.615357 | orchestrator | Wednesday 26 March 2025 15:34:28 +0000 (0:00:02.239) 0:00:04.784 ******* 2025-03-26 15:34:28.749603 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:34:28.750205 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:34:28.751408 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:34:28.752057 | orchestrator | 2025-03-26 15:34:28.755389 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-03-26 15:34:28.755491 | orchestrator | Wednesday 26 March 2025 15:34:28 +0000 (0:00:00.144) 0:00:04.929 ******* 2025-03-26 15:34:28.890674 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:34:28.892920 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:34:28.894391 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:34:28.894432 | orchestrator | 2025-03-26 15:34:28.894457 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-03-26 15:34:29.043660 | orchestrator | Wednesday 26 March 2025 15:34:28 +0000 (0:00:00.143) 0:00:05.072 ******* 2025-03-26 15:34:29.043772 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:34:29.050477 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:34:29.055463 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:34:29.056380 | orchestrator | 2025-03-26 15:34:29.056871 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-03-26 15:34:29.059530 | orchestrator | Wednesday 26 March 2025 15:34:29 +0000 (0:00:00.150) 0:00:05.223 ******* 2025-03-26 15:34:29.204906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-03-26 15:34:29.205025 | orchestrator | 2025-03-26 15:34:29.205780 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-03-26 15:34:29.208687 | orchestrator | Wednesday 26 March 2025 15:34:29 +0000 (0:00:00.160) 0:00:05.384 ******* 2025-03-26 15:34:29.738909 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:34:29.739752 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:34:29.743446 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:34:29.744643 | orchestrator | 2025-03-26 15:34:29.745453 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-03-26 15:34:29.747478 | orchestrator | Wednesday 26 March 2025 15:34:29 +0000 (0:00:00.535) 0:00:05.919 ******* 2025-03-26 15:34:29.847992 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:34:29.849381 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:34:29.849890 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:34:29.850529 | orchestrator | 2025-03-26 15:34:29.850976 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-03-26 15:34:29.854161 | orchestrator | Wednesday 26 March 2025 15:34:29 +0000 (0:00:00.108) 0:00:06.028 ******* 2025-03-26 15:34:30.911231 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:34:30.911880 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:34:30.911922 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:34:30.912498 | orchestrator | 2025-03-26 15:34:30.913590 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-03-26 15:34:30.914699 | orchestrator | Wednesday 26 March 2025 15:34:30 +0000 (0:00:01.063) 0:00:07.091 ******* 2025-03-26 15:34:31.412008 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:34:31.412114 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:34:31.412122 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:34:31.412127 | orchestrator | 2025-03-26 15:34:31.412135 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-03-26 15:34:31.412971 | orchestrator | Wednesday 26 March 2025 15:34:31 +0000 (0:00:00.496) 0:00:07.588 ******* 2025-03-26 15:34:32.475241 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:34:32.475481 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:34:32.482007 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:34:46.309700 | orchestrator | 2025-03-26 15:34:46.309894 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-03-26 15:34:46.309915 | orchestrator | Wednesday 26 March 2025 15:34:32 +0000 (0:00:01.068) 0:00:08.656 ******* 2025-03-26 15:34:46.309947 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:34:46.311577 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:34:46.311697 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:34:46.311716 | orchestrator | 2025-03-26 15:34:46.311763 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-03-26 15:34:46.311798 | orchestrator | Wednesday 26 March 2025 15:34:46 +0000 (0:00:13.823) 0:00:22.479 ******* 2025-03-26 15:34:46.362337 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:34:46.417108 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:34:46.417342 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:34:46.418805 | orchestrator | 2025-03-26 15:34:46.419191 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-03-26 15:34:46.419491 | orchestrator | Wednesday 26 March 2025 15:34:46 +0000 (0:00:00.120) 0:00:22.599 ******* 2025-03-26 15:34:55.292052 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:34:55.293630 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:34:55.293664 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:34:55.293687 | orchestrator | 2025-03-26 15:34:55.296395 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-03-26 15:34:55.297101 | orchestrator | Wednesday 26 March 2025 15:34:55 +0000 (0:00:08.867) 0:00:31.467 ******* 2025-03-26 15:34:55.759633 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:34:55.759860 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:34:55.761368 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:34:55.762779 | orchestrator | 2025-03-26 15:34:55.763664 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-03-26 15:34:55.764851 | orchestrator | Wednesday 26 March 2025 15:34:55 +0000 (0:00:00.471) 0:00:31.939 ******* 2025-03-26 15:34:59.413544 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-03-26 15:34:59.413928 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-03-26 15:34:59.413968 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-03-26 15:34:59.414826 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 
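The "Create custom facts directory" and "Copy fact file(s)" tasks in these plays publish the testbed's network and Ceph device lists as Ansible local facts under /etc/ansible/facts.d, so later plays can read them via ansible_local once facts are gathered again. A minimal sketch of the mechanism; the .fact file name and JSON content are illustrative placeholders:

---
# Sketch only: fact file name and content are illustrative.
- name: Publish a custom local fact (sketch)
  hosts: all
  become: true
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"
    - name: Copy fact file
      ansible.builtin.copy:
        dest: /etc/ansible/facts.d/testbed_ceph_devices.fact
        content: '{"devices": ["/dev/sdb", "/dev/sdc"]}'   # placeholder device list
        mode: "0644"

With a JSON fact file like this, the data becomes available as ansible_local.testbed_ceph_devices.devices after the next fact-gathering run, which is presumably why the play finishes with an explicit "Gathers facts about hosts" step.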
2025-03-26 15:34:59.415561 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-03-26 15:34:59.415589 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-03-26 15:34:59.416439 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-03-26 15:34:59.417543 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-03-26 15:34:59.418243 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-03-26 15:34:59.418636 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-03-26 15:34:59.422080 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-03-26 15:34:59.422112 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-03-26 15:34:59.422224 | orchestrator | 2025-03-26 15:34:59.422247 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-03-26 15:34:59.423734 | orchestrator | Wednesday 26 March 2025 15:34:59 +0000 (0:00:03.653) 0:00:35.593 ******* 2025-03-26 15:35:00.784536 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:00.789646 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:00.789691 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:00.789714 | orchestrator | 2025-03-26 15:35:00.790338 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-26 15:35:00.790628 | orchestrator | 2025-03-26 15:35:00.791645 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-26 15:35:00.791708 | orchestrator | Wednesday 26 March 2025 15:35:00 +0000 (0:00:01.370) 0:00:36.963 ******* 2025-03-26 15:35:02.596798 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:06.107785 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:06.108618 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:06.109894 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:06.111091 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:06.111851 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:06.112288 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:06.113738 | orchestrator | 2025-03-26 15:35:06.115229 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:35:06.115272 | orchestrator | 2025-03-26 15:35:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:35:06.115495 | orchestrator | 2025-03-26 15:35:06 | INFO  | Please wait and do not abort execution. 
2025-03-26 15:35:06.115522 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:35:06.116475 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:35:06.117291 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:35:06.118328 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:35:06.118853 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:35:06.119924 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:35:06.120380 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:35:06.120989 | orchestrator | 2025-03-26 15:35:06.121772 | orchestrator | Wednesday 26 March 2025 15:35:06 +0000 (0:00:05.324) 0:00:42.287 ******* 2025-03-26 15:35:06.122712 | orchestrator | =============================================================================== 2025-03-26 15:35:06.123416 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.82s 2025-03-26 15:35:06.123671 | orchestrator | Install required packages (Debian) -------------------------------------- 8.87s 2025-03-26 15:35:06.124927 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.32s 2025-03-26 15:35:06.125787 | orchestrator | Copy fact files --------------------------------------------------------- 3.65s 2025-03-26 15:35:06.126438 | orchestrator | Create custom facts directory ------------------------------------------- 2.41s 2025-03-26 15:35:06.127325 | orchestrator | Copy fact file ---------------------------------------------------------- 2.24s 2025-03-26 15:35:06.128060 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.37s 2025-03-26 15:35:06.128809 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s 2025-03-26 15:35:06.129956 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s 2025-03-26 15:35:06.130828 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.54s 2025-03-26 15:35:06.131566 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.50s 2025-03-26 15:35:06.132156 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s 2025-03-26 15:35:06.132620 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s 2025-03-26 15:35:06.133139 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.15s 2025-03-26 15:35:06.133809 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s 2025-03-26 15:35:06.134176 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.14s 2025-03-26 15:35:06.134660 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s 2025-03-26 15:35:06.135036 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-03-26 15:35:06.645961 | orchestrator | + osism apply bootstrap 2025-03-26 15:35:08.275588 | 
orchestrator | 2025-03-26 15:35:08 | INFO  | Task e56c7a32-161d-444a-87ef-e99016fb7f4f (bootstrap) was prepared for execution. 2025-03-26 15:35:11.835151 | orchestrator | 2025-03-26 15:35:08 | INFO  | It takes a moment until task e56c7a32-161d-444a-87ef-e99016fb7f4f (bootstrap) has been started and output is visible here. 2025-03-26 15:35:11.835293 | orchestrator | 2025-03-26 15:35:11.835359 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-03-26 15:35:11.835506 | orchestrator | 2025-03-26 15:35:11.840510 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-03-26 15:35:11.841508 | orchestrator | Wednesday 26 March 2025 15:35:11 +0000 (0:00:00.110) 0:00:00.110 ******* 2025-03-26 15:35:11.940512 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:11.970810 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:12.006244 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:12.093985 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:12.094836 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:12.095645 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:12.096719 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:12.098065 | orchestrator | 2025-03-26 15:35:12.098918 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-26 15:35:12.100079 | orchestrator | 2025-03-26 15:35:12.100846 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-26 15:35:12.101258 | orchestrator | Wednesday 26 March 2025 15:35:12 +0000 (0:00:00.261) 0:00:00.372 ******* 2025-03-26 15:35:15.912998 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:15.914250 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:15.915598 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:15.916603 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:15.917523 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:15.918503 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:15.920020 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:15.920991 | orchestrator | 2025-03-26 15:35:15.922105 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-03-26 15:35:15.922461 | orchestrator | 2025-03-26 15:35:15.923669 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-26 15:35:15.925076 | orchestrator | Wednesday 26 March 2025 15:35:15 +0000 (0:00:03.819) 0:00:04.191 ******* 2025-03-26 15:35:16.024737 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-03-26 15:35:16.027051 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-03-26 15:35:16.027150 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-03-26 15:35:16.078228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-03-26 15:35:16.078786 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-03-26 15:35:16.078987 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-03-26 15:35:16.079721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-03-26 15:35:16.080040 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-03-26 15:35:16.081816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-03-26 15:35:16.082727 | orchestrator | skipping: [testbed-manager] => 
(item=testbed-node-5)  2025-03-26 15:35:16.082946 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-03-26 15:35:16.083219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-03-26 15:35:16.131005 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-03-26 15:35:16.132943 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-03-26 15:35:16.133897 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-03-26 15:35:16.134125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-03-26 15:35:16.134497 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-03-26 15:35:16.415215 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-03-26 15:35:16.416227 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-03-26 15:35:16.417129 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-03-26 15:35:16.417718 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-03-26 15:35:16.418541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-03-26 15:35:16.419724 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-03-26 15:35:16.420790 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:35:16.422239 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-03-26 15:35:16.423270 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-03-26 15:35:16.423719 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-03-26 15:35:16.424971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-03-26 15:35:16.425714 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-03-26 15:35:16.426693 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:35:16.427514 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-03-26 15:35:16.428377 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:35:16.429007 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-03-26 15:35:16.429933 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-03-26 15:35:16.430617 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-03-26 15:35:16.431385 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-03-26 15:35:16.431751 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-03-26 15:35:16.432478 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-03-26 15:35:16.433776 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-03-26 15:35:16.434450 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:35:16.434775 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-03-26 15:35:16.435370 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-03-26 15:35:16.435926 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-03-26 15:35:16.436600 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-03-26 15:35:16.438114 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-03-26 15:35:16.439064 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-03-26 15:35:16.439923 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-03-26 15:35:16.441086 | 
orchestrator | skipping: [testbed-node-0] 2025-03-26 15:35:16.441554 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-03-26 15:35:16.442007 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-03-26 15:35:16.442477 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-03-26 15:35:16.443151 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-03-26 15:35:16.443390 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:35:16.444146 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-03-26 15:35:16.444870 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-03-26 15:35:16.445609 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:35:16.446625 | orchestrator | 2025-03-26 15:35:16.447098 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-03-26 15:35:16.447835 | orchestrator | 2025-03-26 15:35:16.448319 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-03-26 15:35:16.449041 | orchestrator | Wednesday 26 March 2025 15:35:16 +0000 (0:00:00.502) 0:00:04.693 ******* 2025-03-26 15:35:16.470988 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:16.522341 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:16.562073 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:16.589871 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:16.653750 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:16.654260 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:16.655185 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:16.655892 | orchestrator | 2025-03-26 15:35:16.656658 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-03-26 15:35:16.657122 | orchestrator | Wednesday 26 March 2025 15:35:16 +0000 (0:00:00.238) 0:00:04.931 ******* 2025-03-26 15:35:17.911965 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:17.912141 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:17.912425 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:17.913494 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:17.914590 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:17.915038 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:17.915945 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:17.916581 | orchestrator | 2025-03-26 15:35:17.917073 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-03-26 15:35:17.917814 | orchestrator | Wednesday 26 March 2025 15:35:17 +0000 (0:00:01.257) 0:00:06.188 ******* 2025-03-26 15:35:19.251286 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:19.251494 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:19.252817 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:19.253799 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:19.255571 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:19.256387 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:19.256420 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:19.256995 | orchestrator | 2025-03-26 15:35:19.257888 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-03-26 15:35:19.258287 | orchestrator | Wednesday 26 March 2025 15:35:19 +0000 (0:00:01.335) 0:00:07.524 ******* 2025-03-26 15:35:19.538958 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:35:19.539336 | orchestrator | 2025-03-26 15:35:19.539785 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-03-26 15:35:19.540561 | orchestrator | Wednesday 26 March 2025 15:35:19 +0000 (0:00:00.291) 0:00:07.816 ******* 2025-03-26 15:35:21.621394 | orchestrator | changed: [testbed-manager] 2025-03-26 15:35:21.623900 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:35:21.623978 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:35:21.623998 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:35:21.624017 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:35:21.626463 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:35:21.627676 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:35:21.628862 | orchestrator | 2025-03-26 15:35:21.629645 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-03-26 15:35:21.630388 | orchestrator | Wednesday 26 March 2025 15:35:21 +0000 (0:00:02.079) 0:00:09.896 ******* 2025-03-26 15:35:21.695631 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:35:21.899158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:35:21.900813 | orchestrator | 2025-03-26 15:35:21.901987 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-03-26 15:35:21.902686 | orchestrator | Wednesday 26 March 2025 15:35:21 +0000 (0:00:00.279) 0:00:10.176 ******* 2025-03-26 15:35:22.987071 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:35:22.987429 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:35:22.988998 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:35:22.989552 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:35:22.990143 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:35:22.991100 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:35:22.991608 | orchestrator | 2025-03-26 15:35:22.992422 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-03-26 15:35:22.993712 | orchestrator | Wednesday 26 March 2025 15:35:22 +0000 (0:00:01.085) 0:00:11.262 ******* 2025-03-26 15:35:23.063739 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:35:23.631671 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:35:23.632821 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:35:23.632861 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:35:23.633616 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:35:23.634659 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:35:23.636006 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:35:23.637887 | orchestrator | 2025-03-26 15:35:23.638160 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-03-26 15:35:23.639044 | orchestrator | Wednesday 26 March 2025 15:35:23 +0000 (0:00:00.646) 0:00:11.908 ******* 2025-03-26 15:35:23.781001 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:35:23.804121 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:35:24.111197 | 
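The proxy role above configures apt and the system-wide environment on the nodes so package downloads go through an HTTP proxy; the manager takes a different path here, which explains the mixed ok/skipping results. A minimal sketch with a hypothetical proxy endpoint and drop-in file name:

---
# Sketch only: the proxy endpoint and drop-in file name are assumptions.
- name: Configure an HTTP proxy for apt and the system environment (sketch)
  hosts: all
  become: true
  vars:
    proxy_url: "http://192.168.16.5:3128"    # hypothetical proxy on the manager
  tasks:
    - name: Configure proxy parameters for apt
      ansible.builtin.copy:
        dest: /etc/apt/apt.conf.d/00proxy
        content: |
          Acquire::http::Proxy "{{ proxy_url }}";
          Acquire::https::Proxy "{{ proxy_url }}";
        mode: "0644"
    - name: Set system wide settings in environment file
      ansible.builtin.blockinfile:
        path: /etc/environment
        block: |
          http_proxy={{ proxy_url }}
          https_proxy={{ proxy_url }}
          no_proxy=localhost,127.0.0.1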
orchestrator | skipping: [testbed-node-5] 2025-03-26 15:35:24.112078 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:35:24.114493 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:35:24.116444 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:35:24.117895 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:24.118815 | orchestrator | 2025-03-26 15:35:24.119843 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-03-26 15:35:24.120944 | orchestrator | Wednesday 26 March 2025 15:35:24 +0000 (0:00:00.480) 0:00:12.388 ******* 2025-03-26 15:35:24.190452 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:35:24.212879 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:35:24.239132 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:35:24.266440 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:35:24.339815 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:35:24.340853 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:35:24.341735 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:35:24.342691 | orchestrator | 2025-03-26 15:35:24.343042 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-03-26 15:35:24.344251 | orchestrator | Wednesday 26 March 2025 15:35:24 +0000 (0:00:00.228) 0:00:12.617 ******* 2025-03-26 15:35:24.652990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:35:24.653111 | orchestrator | 2025-03-26 15:35:24.653140 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-03-26 15:35:24.653715 | orchestrator | Wednesday 26 March 2025 15:35:24 +0000 (0:00:00.311) 0:00:12.929 ******* 2025-03-26 15:35:24.976134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:35:24.976249 | orchestrator | 2025-03-26 15:35:24.976272 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-03-26 15:35:24.976489 | orchestrator | Wednesday 26 March 2025 15:35:24 +0000 (0:00:00.322) 0:00:13.252 ******* 2025-03-26 15:35:26.555172 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:26.555338 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:26.555931 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:26.557367 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:26.558367 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:26.559142 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:26.560155 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:26.560917 | orchestrator | 2025-03-26 15:35:26.561420 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-03-26 15:35:26.562248 | orchestrator | Wednesday 26 March 2025 15:35:26 +0000 (0:00:01.578) 0:00:14.831 ******* 2025-03-26 15:35:26.640712 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:35:26.673630 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:35:26.711315 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:35:26.748439 | orchestrator | skipping: 
[testbed-node-5] 2025-03-26 15:35:26.837002 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:35:26.837323 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:35:26.837357 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:35:26.838208 | orchestrator | 2025-03-26 15:35:26.839362 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-03-26 15:35:26.840302 | orchestrator | Wednesday 26 March 2025 15:35:26 +0000 (0:00:00.275) 0:00:15.106 ******* 2025-03-26 15:35:27.508006 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:27.508360 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:27.508966 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:27.510495 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:27.510723 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:27.511288 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:27.512203 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:27.512802 | orchestrator | 2025-03-26 15:35:27.513788 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-03-26 15:35:27.514202 | orchestrator | Wednesday 26 March 2025 15:35:27 +0000 (0:00:00.677) 0:00:15.784 ******* 2025-03-26 15:35:27.626724 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:35:27.654173 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:35:27.685981 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:35:27.772260 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:35:27.773035 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:35:27.773938 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:35:27.774619 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:35:27.775381 | orchestrator | 2025-03-26 15:35:27.775966 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-03-26 15:35:27.776712 | orchestrator | Wednesday 26 March 2025 15:35:27 +0000 (0:00:00.265) 0:00:16.050 ******* 2025-03-26 15:35:28.334505 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:28.335432 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:35:28.335539 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:35:28.338998 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:35:28.340924 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:35:28.341902 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:35:28.343555 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:35:28.346950 | orchestrator | 2025-03-26 15:35:29.582503 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-03-26 15:35:29.582619 | orchestrator | Wednesday 26 March 2025 15:35:28 +0000 (0:00:00.559) 0:00:16.609 ******* 2025-03-26 15:35:29.582652 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:29.583809 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:35:29.585495 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:35:29.586721 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:35:29.587885 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:35:29.588793 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:35:29.589971 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:35:29.590984 | orchestrator | 2025-03-26 15:35:29.591850 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-03-26 15:35:29.592356 | orchestrator | Wednesday 26 March 
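The resolvconf role hands DNS resolution over to systemd-resolved: packages that manage /etc/resolv.conf are removed, the file is linked to the resolved stub, and the service is enabled; the "Copy configuration files" and "Restart systemd-resolved service" steps then apply the testbed's name servers. A minimal sketch of the two central steps with stock modules, not the role's exact task list:

---
# Sketch only: shows the link and service handling, not the role's exact tasks.
- name: Hand /etc/resolv.conf over to systemd-resolved (sketch)
  hosts: all
  become: true
  tasks:
    - name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
      ansible.builtin.file:
        src: /run/systemd/resolve/stub-resolv.conf
        dest: /etc/resolv.conf
        state: link
        force: true
    - name: Start/enable systemd-resolved service
      ansible.builtin.systemd:
        name: systemd-resolved
        state: started
        enabled: true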
2025 15:35:29 +0000 (0:00:01.247) 0:00:17.857 ******* 2025-03-26 15:35:31.766877 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:31.767747 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:31.770871 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:31.771687 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:31.772551 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:31.773602 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:31.774543 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:31.775828 | orchestrator | 2025-03-26 15:35:31.776701 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-03-26 15:35:31.777274 | orchestrator | Wednesday 26 March 2025 15:35:31 +0000 (0:00:02.184) 0:00:20.042 ******* 2025-03-26 15:35:32.110173 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:35:32.110333 | orchestrator | 2025-03-26 15:35:32.111612 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-03-26 15:35:32.114513 | orchestrator | Wednesday 26 March 2025 15:35:32 +0000 (0:00:00.344) 0:00:20.386 ******* 2025-03-26 15:35:32.184143 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:35:33.689908 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:35:33.690737 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:35:33.690797 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:35:33.690821 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:35:33.691930 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:35:33.692439 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:35:33.694959 | orchestrator | 2025-03-26 15:35:33.695627 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-03-26 15:35:33.696096 | orchestrator | Wednesday 26 March 2025 15:35:33 +0000 (0:00:01.576) 0:00:21.963 ******* 2025-03-26 15:35:33.780979 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:33.812975 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:33.848698 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:33.883051 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:33.960224 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:33.960902 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:33.961358 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:33.961904 | orchestrator | 2025-03-26 15:35:33.962977 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-03-26 15:35:33.963469 | orchestrator | Wednesday 26 March 2025 15:35:33 +0000 (0:00:00.274) 0:00:22.237 ******* 2025-03-26 15:35:34.043638 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:34.070249 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:34.140368 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:34.227856 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:34.228522 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:34.229304 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:34.230102 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:34.230636 | orchestrator | 2025-03-26 15:35:34.230959 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-03-26 15:35:34.231964 | 
orchestrator | Wednesday 26 March 2025 15:35:34 +0000 (0:00:00.268) 0:00:22.505 ******* 2025-03-26 15:35:34.308168 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:34.336823 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:34.363455 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:34.391705 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:34.470284 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:34.471420 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:34.472424 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:34.472562 | orchestrator | 2025-03-26 15:35:34.473527 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-03-26 15:35:34.473905 | orchestrator | Wednesday 26 March 2025 15:35:34 +0000 (0:00:00.242) 0:00:22.747 ******* 2025-03-26 15:35:34.821375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:35:34.824100 | orchestrator | 2025-03-26 15:35:34.825599 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-03-26 15:35:34.826215 | orchestrator | Wednesday 26 March 2025 15:35:34 +0000 (0:00:00.347) 0:00:23.095 ******* 2025-03-26 15:35:35.400976 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:35.401599 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:35.402608 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:35.404307 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:35.405362 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:35.405639 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:35.406269 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:35.406839 | orchestrator | 2025-03-26 15:35:35.407684 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-03-26 15:35:35.407930 | orchestrator | Wednesday 26 March 2025 15:35:35 +0000 (0:00:00.582) 0:00:23.678 ******* 2025-03-26 15:35:35.480162 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:35:35.517863 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:35:35.554298 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:35:35.584601 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:35:35.667835 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:35:35.673722 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:35:35.676038 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:35:35.676707 | orchestrator | 2025-03-26 15:35:35.677268 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-03-26 15:35:35.678259 | orchestrator | Wednesday 26 March 2025 15:35:35 +0000 (0:00:00.265) 0:00:23.943 ******* 2025-03-26 15:35:36.937141 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:36.939212 | orchestrator | changed: [testbed-manager] 2025-03-26 15:35:36.944176 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:36.946897 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:36.946942 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:35:36.948006 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:35:36.948032 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:35:36.948047 | orchestrator | 2025-03-26 15:35:36.948068 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] 
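Because these hosts run Ubuntu 24.04, the "Include tasks for Ubuntu < 24.04" step is skipped; instead the role removes the legacy /etc/apt/sources.list (task below) and ships a deb822-style ubuntu.sources file alongside the 99osism apt configuration before refreshing the package cache. A sketch of that approach; the mirror URL, suites and keyring path are placeholders, not the testbed's actual repository settings:

---
# Sketch only: mirror URL, suites and keyring path are placeholders.
- name: Configure deb822 apt sources on Ubuntu 24.04 (sketch)
  hosts: all
  become: true
  tasks:
    - name: Remove sources.list file
      ansible.builtin.file:
        path: /etc/apt/sources.list
        state: absent
    - name: Copy ubuntu.sources file
      ansible.builtin.copy:
        dest: /etc/apt/sources.list.d/ubuntu.sources
        content: |
          Types: deb
          URIs: http://archive.ubuntu.com/ubuntu
          Suites: noble noble-updates noble-security
          Components: main restricted universe multiverse
          Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
        mode: "0644"
      notify: Update package cache
  handlers:
    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true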
********************* 2025-03-26 15:35:36.948970 | orchestrator | Wednesday 26 March 2025 15:35:36 +0000 (0:00:01.268) 0:00:25.212 ******* 2025-03-26 15:35:37.610253 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:37.613440 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:37.613475 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:37.614356 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:37.615020 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:37.615892 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:37.616946 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:37.617392 | orchestrator | 2025-03-26 15:35:37.618429 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-03-26 15:35:37.619270 | orchestrator | Wednesday 26 March 2025 15:35:37 +0000 (0:00:00.673) 0:00:25.885 ******* 2025-03-26 15:35:38.856817 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:38.861178 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:38.863743 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:38.863801 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:38.865932 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:35:38.866698 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:35:38.867472 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:35:38.869039 | orchestrator | 2025-03-26 15:35:38.870275 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-03-26 15:35:38.871030 | orchestrator | Wednesday 26 March 2025 15:35:38 +0000 (0:00:01.242) 0:00:27.127 ******* 2025-03-26 15:35:53.078909 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:53.080620 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:53.080654 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:53.080676 | orchestrator | changed: [testbed-manager] 2025-03-26 15:35:53.083227 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:35:53.083655 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:35:53.084500 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:35:53.086588 | orchestrator | 2025-03-26 15:35:53.087397 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-03-26 15:35:53.088172 | orchestrator | Wednesday 26 March 2025 15:35:53 +0000 (0:00:14.223) 0:00:41.350 ******* 2025-03-26 15:35:53.157607 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:53.188180 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:53.215344 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:53.246818 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:53.309886 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:53.310886 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:53.311669 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:53.312906 | orchestrator | 2025-03-26 15:35:53.313187 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-03-26 15:35:53.314122 | orchestrator | Wednesday 26 March 2025 15:35:53 +0000 (0:00:00.236) 0:00:41.587 ******* 2025-03-26 15:35:53.402758 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:53.429896 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:53.461361 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:53.484833 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:53.569017 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:53.569193 | orchestrator | ok: [testbed-node-1] 2025-03-26 
15:35:53.569679 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:53.569704 | orchestrator | 2025-03-26 15:35:53.570102 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-03-26 15:35:53.570663 | orchestrator | Wednesday 26 March 2025 15:35:53 +0000 (0:00:00.259) 0:00:41.846 ******* 2025-03-26 15:35:53.645670 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:53.678267 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:53.704006 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:53.736413 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:53.814306 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:53.815188 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:53.817000 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:53.818473 | orchestrator | 2025-03-26 15:35:53.818982 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-03-26 15:35:53.819994 | orchestrator | Wednesday 26 March 2025 15:35:53 +0000 (0:00:00.243) 0:00:42.090 ******* 2025-03-26 15:35:54.122157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:35:54.123214 | orchestrator | 2025-03-26 15:35:54.124945 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-03-26 15:35:54.125816 | orchestrator | Wednesday 26 March 2025 15:35:54 +0000 (0:00:00.306) 0:00:42.397 ******* 2025-03-26 15:35:56.081240 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:56.081430 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:56.082205 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:56.083234 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:35:56.083859 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:56.083880 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:56.083897 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:56.084278 | orchestrator | 2025-03-26 15:35:56.084301 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-03-26 15:35:56.087671 | orchestrator | Wednesday 26 March 2025 15:35:56 +0000 (0:00:01.960) 0:00:44.357 ******* 2025-03-26 15:35:57.282293 | orchestrator | changed: [testbed-manager] 2025-03-26 15:35:57.283918 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:35:57.283943 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:35:57.283962 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:35:57.285015 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:35:57.285484 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:35:57.286573 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:35:57.287243 | orchestrator | 2025-03-26 15:35:57.288059 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-03-26 15:35:57.288506 | orchestrator | Wednesday 26 March 2025 15:35:57 +0000 (0:00:01.196) 0:00:45.554 ******* 2025-03-26 15:35:58.189725 | orchestrator | ok: [testbed-manager] 2025-03-26 15:35:58.190008 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:35:58.191117 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:35:58.192134 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:35:58.192359 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:35:58.193172 | orchestrator | ok: 
[testbed-node-5] 2025-03-26 15:35:58.193605 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:35:58.194521 | orchestrator | 2025-03-26 15:35:58.195013 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-03-26 15:35:58.195499 | orchestrator | Wednesday 26 March 2025 15:35:58 +0000 (0:00:00.911) 0:00:46.466 ******* 2025-03-26 15:35:58.556062 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:35:58.560219 | orchestrator | 2025-03-26 15:35:58.561329 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-03-26 15:35:58.561993 | orchestrator | Wednesday 26 March 2025 15:35:58 +0000 (0:00:00.365) 0:00:46.831 ******* 2025-03-26 15:35:59.777385 | orchestrator | changed: [testbed-manager] 2025-03-26 15:35:59.777775 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:35:59.778518 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:35:59.779529 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:35:59.779933 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:35:59.780837 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:35:59.781142 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:35:59.782405 | orchestrator | 2025-03-26 15:35:59.783347 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-03-26 15:35:59.783979 | orchestrator | Wednesday 26 March 2025 15:35:59 +0000 (0:00:01.221) 0:00:48.053 ******* 2025-03-26 15:35:59.869661 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:35:59.909486 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:35:59.948913 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:35:59.986466 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:36:00.198290 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:36:00.201540 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:36:00.203159 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:36:00.204639 | orchestrator | 2025-03-26 15:36:00.205935 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-03-26 15:36:00.206938 | orchestrator | Wednesday 26 March 2025 15:36:00 +0000 (0:00:00.417) 0:00:48.470 ******* 2025-03-26 15:36:13.247697 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:36:13.247929 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:36:13.247956 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:36:13.247977 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:36:13.249723 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:36:13.250226 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:36:13.251107 | orchestrator | changed: [testbed-manager] 2025-03-26 15:36:13.251741 | orchestrator | 2025-03-26 15:36:13.252688 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-03-26 15:36:13.253172 | orchestrator | Wednesday 26 March 2025 15:36:13 +0000 (0:00:13.046) 0:01:01.516 ******* 2025-03-26 15:36:14.678462 | orchestrator | ok: [testbed-manager] 2025-03-26 15:36:14.679181 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:36:14.680222 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:36:14.681038 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:36:14.682885 | 
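The rsyslog role replaces rsyslog.conf and, as the "Forward syslog message to local fluentd daemon" task above shows, adds a rule that relays all syslog traffic to a fluentd instance on the same host. One common way to express such a rule is an omfwd action in an rsyslog drop-in; the port and file name below are assumptions rather than the role's actual template:

---
# Sketch only: the fluentd syslog port (5140) and drop-in file name are assumptions.
- name: Forward syslog messages to a local fluentd daemon (sketch)
  hosts: all
  become: true
  tasks:
    - name: Install rsyslog forwarding rule
      ansible.builtin.copy:
        dest: /etc/rsyslog.d/10-fluentd.conf
        content: |
          # Send every message to the local fluentd syslog input over TCP.
          *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="tcp")
        mode: "0644"
      notify: Restart rsyslog
  handlers:
    - name: Restart rsyslog
      ansible.builtin.systemd:
        name: rsyslog
        state: restarted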
orchestrator | ok: [testbed-node-4] 2025-03-26 15:36:14.684075 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:36:14.685142 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:36:14.686080 | orchestrator | 2025-03-26 15:36:14.686976 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-03-26 15:36:14.687992 | orchestrator | Wednesday 26 March 2025 15:36:14 +0000 (0:00:01.436) 0:01:02.953 ******* 2025-03-26 15:36:15.752474 | orchestrator | ok: [testbed-manager] 2025-03-26 15:36:15.755745 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:36:15.756896 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:36:15.757374 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:36:15.758448 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:36:15.759546 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:36:15.760428 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:36:15.761495 | orchestrator | 2025-03-26 15:36:15.762177 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-03-26 15:36:15.762953 | orchestrator | Wednesday 26 March 2025 15:36:15 +0000 (0:00:01.075) 0:01:04.028 ******* 2025-03-26 15:36:15.855765 | orchestrator | ok: [testbed-manager] 2025-03-26 15:36:15.883045 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:36:15.915745 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:36:15.995673 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:36:15.996242 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:36:15.996967 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:36:15.997611 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:36:15.998756 | orchestrator | 2025-03-26 15:36:15.999520 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-03-26 15:36:16.000405 | orchestrator | Wednesday 26 March 2025 15:36:15 +0000 (0:00:00.244) 0:01:04.272 ******* 2025-03-26 15:36:16.073383 | orchestrator | ok: [testbed-manager] 2025-03-26 15:36:16.105046 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:36:16.134647 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:36:16.172719 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:36:16.246915 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:36:16.247648 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:36:16.248759 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:36:16.249508 | orchestrator | 2025-03-26 15:36:16.253240 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-03-26 15:36:16.253750 | orchestrator | Wednesday 26 March 2025 15:36:16 +0000 (0:00:00.251) 0:01:04.524 ******* 2025-03-26 15:36:16.592555 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:36:16.593538 | orchestrator | 2025-03-26 15:36:16.594667 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-03-26 15:36:16.595459 | orchestrator | Wednesday 26 March 2025 15:36:16 +0000 (0:00:00.344) 0:01:04.869 ******* 2025-03-26 15:36:18.556869 | orchestrator | ok: [testbed-manager] 2025-03-26 15:36:18.557239 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:36:18.557302 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:36:18.557514 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:36:18.558121 | 
orchestrator | ok: [testbed-node-3] 2025-03-26 15:36:18.558705 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:36:18.560849 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:36:18.561879 | orchestrator | 2025-03-26 15:36:18.562367 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-03-26 15:36:18.563737 | orchestrator | Wednesday 26 March 2025 15:36:18 +0000 (0:00:01.964) 0:01:06.833 ******* 2025-03-26 15:36:19.177346 | orchestrator | changed: [testbed-manager] 2025-03-26 15:36:19.178520 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:36:19.180719 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:36:19.180775 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:36:19.181447 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:36:19.182251 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:36:19.183202 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:36:19.183844 | orchestrator | 2025-03-26 15:36:19.184986 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-03-26 15:36:19.185832 | orchestrator | Wednesday 26 March 2025 15:36:19 +0000 (0:00:00.618) 0:01:07.452 ******* 2025-03-26 15:36:19.267578 | orchestrator | ok: [testbed-manager] 2025-03-26 15:36:19.296763 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:36:19.324686 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:36:19.372667 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:36:19.443919 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:36:19.443981 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:36:19.445198 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:36:19.445653 | orchestrator | 2025-03-26 15:36:19.447058 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-03-26 15:36:20.852113 | orchestrator | Wednesday 26 March 2025 15:36:19 +0000 (0:00:00.267) 0:01:07.720 ******* 2025-03-26 15:36:20.852227 | orchestrator | ok: [testbed-manager] 2025-03-26 15:36:20.855555 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:36:20.856476 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:36:20.857498 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:36:20.858139 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:36:20.858989 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:36:20.859952 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:36:20.860605 | orchestrator | 2025-03-26 15:36:20.861400 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-03-26 15:36:20.861939 | orchestrator | Wednesday 26 March 2025 15:36:20 +0000 (0:00:01.407) 0:01:09.127 ******* 2025-03-26 15:36:22.819122 | orchestrator | changed: [testbed-manager] 2025-03-26 15:36:22.819317 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:36:22.819927 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:36:22.820052 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:36:22.820693 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:36:22.821141 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:36:22.823672 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:36:22.824482 | orchestrator | 2025-03-26 15:36:22.826399 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-03-26 15:36:22.827495 | orchestrator | Wednesday 26 March 2025 15:36:22 +0000 (0:00:01.966) 0:01:11.094 ******* 2025-03-26 15:36:25.496728 | orchestrator | ok: 
[testbed-manager] 2025-03-26 15:36:25.498074 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:36:25.498122 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:36:25.499145 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:36:25.501674 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:36:25.502623 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:36:25.503342 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:36:25.504114 | orchestrator | 2025-03-26 15:36:25.504447 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-03-26 15:36:25.505249 | orchestrator | Wednesday 26 March 2025 15:36:25 +0000 (0:00:02.675) 0:01:13.770 ******* 2025-03-26 15:37:04.403631 | orchestrator | ok: [testbed-manager] 2025-03-26 15:37:04.404865 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:37:04.404931 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:37:04.406536 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:37:04.407310 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:37:04.407567 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:37:04.407891 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:37:04.408325 | orchestrator | 2025-03-26 15:37:04.408742 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-03-26 15:37:04.409141 | orchestrator | Wednesday 26 March 2025 15:37:04 +0000 (0:00:38.905) 0:01:52.675 ******* 2025-03-26 15:38:26.921680 | orchestrator | changed: [testbed-manager] 2025-03-26 15:38:28.728904 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:38:28.729031 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:38:28.729071 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:38:28.729086 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:38:28.729100 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:38:28.729114 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:38:28.729128 | orchestrator | 2025-03-26 15:38:28.729144 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-03-26 15:38:28.729160 | orchestrator | Wednesday 26 March 2025 15:38:26 +0000 (0:01:22.513) 0:03:15.189 ******* 2025-03-26 15:38:28.729191 | orchestrator | ok: [testbed-manager] 2025-03-26 15:38:28.729541 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:38:28.730170 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:38:28.731048 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:38:28.733902 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:38:28.734932 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:38:28.735862 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:38:28.736009 | orchestrator | 2025-03-26 15:38:28.736355 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-03-26 15:38:28.736737 | orchestrator | Wednesday 26 March 2025 15:38:28 +0000 (0:00:01.815) 0:03:17.004 ******* 2025-03-26 15:38:42.240939 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:38:42.243622 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:38:42.243656 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:38:42.243670 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:38:42.243692 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:38:42.244877 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:38:42.244901 | orchestrator | changed: [testbed-manager] 2025-03-26 15:38:42.244915 | orchestrator | 2025-03-26 15:38:42.244930 | orchestrator | TASK [osism.commons.sysctl : Include sysctl 
tasks] ***************************** 2025-03-26 15:38:42.244949 | orchestrator | Wednesday 26 March 2025 15:38:42 +0000 (0:00:13.507) 0:03:30.512 ******* 2025-03-26 15:38:42.656067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-03-26 15:38:42.657141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-03-26 15:38:42.657540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-03-26 15:38:42.661842 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-03-26 15:38:42.662525 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-03-26 15:38:42.663347 | orchestrator | 2025-03-26 15:38:42.663880 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-03-26 15:38:42.664700 | orchestrator | Wednesday 26 March 2025 15:38:42 +0000 (0:00:00.420) 0:03:30.932 ******* 2025-03-26 15:38:42.715253 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-26 15:38:42.758655 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:38:42.759382 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-26 15:38:42.760340 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-26 15:38:42.793184 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:38:42.796586 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-03-26 15:38:42.825616 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:38:42.856253 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:38:44.416280 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-26 15:38:44.416574 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-26 15:38:44.418075 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-03-26 15:38:44.420012 | orchestrator | 2025-03-26 15:38:44.420086 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-03-26 15:38:44.422012 | orchestrator | Wednesday 26 March 2025 15:38:44 +0000 (0:00:01.758) 0:03:32.691 ******* 2025-03-26 15:38:44.458144 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-26 15:38:44.510579 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-26 15:38:44.512584 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-26 15:38:44.518696 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-26 15:38:44.518744 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-26 15:38:44.575004 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-26 15:38:44.575046 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-26 15:38:44.578609 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-26 15:38:44.579085 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-26 15:38:44.579213 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-26 15:38:44.580205 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-26 15:38:44.581094 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-26 15:38:44.582299 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-26 15:38:44.582830 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-26 15:38:44.583586 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-26 15:38:44.584317 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-26 15:38:44.586138 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-26 15:38:44.587142 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-26 15:38:44.587763 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-26 15:38:44.588848 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-26 15:38:44.590965 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-26 15:38:44.592319 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-26 
15:38:44.593476 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-26 15:38:44.637394 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:38:44.637910 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-26 15:38:44.638492 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-26 15:38:44.639223 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-26 15:38:44.639825 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-26 15:38:44.640560 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-26 15:38:44.641012 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-03-26 15:38:44.641894 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-26 15:38:44.642668 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-03-26 15:38:44.643063 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-26 15:38:44.680434 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:38:44.680959 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-03-26 15:38:44.681688 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-03-26 15:38:44.682459 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-03-26 15:38:44.683168 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-03-26 15:38:44.684032 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-03-26 15:38:44.714841 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-03-26 15:38:44.715532 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:38:44.717122 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-03-26 15:38:53.604148 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-03-26 15:38:53.604345 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:38:53.604437 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-26 15:38:53.605897 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-26 15:38:53.605933 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-26 15:38:53.605984 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-26 15:38:53.606004 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-26 15:38:53.606524 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-26 15:38:53.607659 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-03-26 15:38:53.609930 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-26 15:38:53.610564 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-26 15:38:53.610592 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-26 15:38:53.610611 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-03-26 15:38:53.611501 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-26 15:38:53.611890 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-26 15:38:53.612929 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-03-26 15:38:53.613214 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-26 15:38:53.614656 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-03-26 15:38:53.616275 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-26 15:38:53.617238 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-26 15:38:53.618873 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-26 15:38:53.620397 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-26 15:38:53.620669 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-26 15:38:53.621664 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-03-26 15:38:53.622453 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-26 15:38:53.623745 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-03-26 15:38:53.624335 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-26 15:38:53.624755 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-03-26 15:38:53.625559 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-26 15:38:53.626186 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-03-26 15:38:53.627158 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-03-26 15:38:53.627699 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-03-26 15:38:53.628396 | orchestrator | 2025-03-26 15:38:53.629087 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-03-26 15:38:53.629582 | orchestrator | Wednesday 26 March 2025 15:38:53 +0000 (0:00:09.189) 0:03:41.880 ******* 2025-03-26 15:38:55.192584 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-26 15:38:55.192911 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 
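The sysctl tuning recorded above (vm.max_map_count=262144 for the elasticsearch group, the net.ipv4.tcp_* keepalive and buffer settings for the rabbitmq group, and vm.swappiness=1 on all hosts) is the kind of change the ansible.posix.sysctl module applies. The standalone playbook below is only a minimal illustrative sketch of that pattern; the variable layout is an assumption, not the actual osism.commons.sysctl role code.

  # Illustrative sketch: apply the same kinds of sysctl settings seen in the
  # log with ansible.posix.sysctl. The value list mirrors the log output; the
  # structure is assumed and is NOT the osism.commons.sysctl implementation.
  - hosts: all
    become: true
    tasks:
      - name: Set sysctl parameters
        ansible.posix.sysctl:
          name: "{{ item.name }}"
          value: "{{ item.value }}"
          state: present
          sysctl_set: true
          reload: true
        loop:
          - { name: vm.max_map_count, value: 262144 }        # elasticsearch group
          - { name: net.ipv4.tcp_keepalive_time, value: 6 }  # rabbitmq group
          - { name: vm.swappiness, value: 1 }                 # generic (all hosts)

With this module the values are persisted to /etc/sysctl.conf by default (or to a file given via sysctl_file) and reloaded immediately when reload is true, which matches the "changed" results reported per host above.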
2025-03-26 15:38:55.192980 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-26 15:38:55.193540 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-26 15:38:55.194386 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-26 15:38:55.195056 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-26 15:38:55.195491 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-03-26 15:38:55.195989 | orchestrator | 2025-03-26 15:38:55.196477 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-03-26 15:38:55.197567 | orchestrator | Wednesday 26 March 2025 15:38:55 +0000 (0:00:01.585) 0:03:43.466 ******* 2025-03-26 15:38:55.274420 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-26 15:38:55.302162 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:38:55.364140 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-26 15:38:55.404290 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:38:55.404689 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-26 15:38:55.769353 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:38:55.769870 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-03-26 15:38:55.773670 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:38:55.773939 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-26 15:38:55.773973 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-26 15:38:55.774955 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-03-26 15:38:55.777691 | orchestrator | 2025-03-26 15:38:55.778102 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-03-26 15:38:55.778952 | orchestrator | Wednesday 26 March 2025 15:38:55 +0000 (0:00:00.578) 0:03:44.044 ******* 2025-03-26 15:38:55.834123 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-26 15:38:55.865650 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:38:55.969400 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-26 15:38:55.969566 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-26 15:38:57.423128 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:38:57.424910 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:38:57.426600 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-03-26 15:38:57.428135 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:38:57.429647 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-26 15:38:57.431526 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024}) 2025-03-26 15:38:57.433186 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-03-26 15:38:57.434230 | orchestrator | 2025-03-26 15:38:57.434979 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-03-26 15:38:57.435013 | orchestrator | Wednesday 26 March 2025 15:38:57 +0000 (0:00:01.653) 0:03:45.698 ******* 2025-03-26 15:38:57.503888 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:38:57.537933 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:38:57.570288 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:38:57.595833 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:38:57.757703 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:38:57.757909 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:38:57.761627 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:39:03.476087 | orchestrator | 2025-03-26 15:39:03.476244 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-03-26 15:39:03.476278 | orchestrator | Wednesday 26 March 2025 15:38:57 +0000 (0:00:00.335) 0:03:46.033 ******* 2025-03-26 15:39:03.476311 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:39:03.476497 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:39:03.477536 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:39:03.478447 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:39:03.478886 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:39:03.479609 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:39:03.479902 | orchestrator | ok: [testbed-manager] 2025-03-26 15:39:03.480525 | orchestrator | 2025-03-26 15:39:03.481018 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-03-26 15:39:03.481996 | orchestrator | Wednesday 26 March 2025 15:39:03 +0000 (0:00:05.719) 0:03:51.752 ******* 2025-03-26 15:39:03.556585 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-03-26 15:39:03.603579 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:39:03.603808 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-03-26 15:39:03.605754 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-03-26 15:39:03.641945 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:39:03.642257 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-03-26 15:39:03.691120 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:39:03.730591 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-03-26 15:39:03.730626 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:39:03.820893 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-03-26 15:39:03.822337 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:39:03.823086 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:39:03.824672 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-03-26 15:39:03.826059 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:39:03.826120 | orchestrator | 2025-03-26 15:39:03.827510 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-03-26 15:39:03.827878 | orchestrator | Wednesday 26 March 2025 15:39:03 +0000 (0:00:00.347) 0:03:52.099 ******* 2025-03-26 15:39:04.935297 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-03-26 15:39:04.936051 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-03-26 15:39:04.936096 | orchestrator | 
ok: [testbed-node-5] => (item=cron) 2025-03-26 15:39:04.936630 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-03-26 15:39:04.937259 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-03-26 15:39:04.937290 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-03-26 15:39:04.937626 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-03-26 15:39:04.938873 | orchestrator | 2025-03-26 15:39:04.939628 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-03-26 15:39:04.940438 | orchestrator | Wednesday 26 March 2025 15:39:04 +0000 (0:00:01.111) 0:03:53.211 ******* 2025-03-26 15:39:05.435288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:39:05.435456 | orchestrator | 2025-03-26 15:39:05.436372 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-03-26 15:39:05.437373 | orchestrator | Wednesday 26 March 2025 15:39:05 +0000 (0:00:00.501) 0:03:53.712 ******* 2025-03-26 15:39:06.846254 | orchestrator | ok: [testbed-manager] 2025-03-26 15:39:06.846486 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:39:06.846507 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:39:06.846521 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:39:06.846541 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:39:06.849134 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:39:06.849282 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:39:06.849319 | orchestrator | 2025-03-26 15:39:06.852993 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-03-26 15:39:06.853087 | orchestrator | Wednesday 26 March 2025 15:39:06 +0000 (0:00:01.407) 0:03:55.120 ******* 2025-03-26 15:39:07.547098 | orchestrator | ok: [testbed-manager] 2025-03-26 15:39:07.547262 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:39:07.547755 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:39:07.549115 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:39:07.553327 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:39:07.553623 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:39:08.239468 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:39:08.239552 | orchestrator | 2025-03-26 15:39:08.239560 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-03-26 15:39:08.239567 | orchestrator | Wednesday 26 March 2025 15:39:07 +0000 (0:00:00.703) 0:03:55.823 ******* 2025-03-26 15:39:08.239584 | orchestrator | changed: [testbed-manager] 2025-03-26 15:39:08.240126 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:39:08.240137 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:39:08.240143 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:39:08.240151 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:39:08.240927 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:39:08.241731 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:39:08.242207 | orchestrator | 2025-03-26 15:39:08.242503 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-03-26 15:39:08.243241 | orchestrator | Wednesday 26 March 2025 15:39:08 +0000 (0:00:00.693) 0:03:56.516 ******* 2025-03-26 15:39:08.922106 | orchestrator | ok: [testbed-manager] 2025-03-26 
15:39:08.922806 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:39:08.922830 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:39:08.923730 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:39:08.928310 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:39:08.930977 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:39:08.933564 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:39:08.934953 | orchestrator | 2025-03-26 15:39:08.935560 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-03-26 15:39:08.935752 | orchestrator | Wednesday 26 March 2025 15:39:08 +0000 (0:00:00.680) 0:03:57.197 ******* 2025-03-26 15:39:09.991959 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743001760.4793038, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:09.992506 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743001773.7203329, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:09.993809 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743001772.0793872, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:09.994973 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743001781.0926569, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:09.997583 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743001773.9278674, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:09.997910 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743001777.442562, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:09.997935 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1743001784.261963, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:09.997954 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743001794.0134227, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:09.998820 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743001712.3218088, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:10.000516 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743001708.379267, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:10.001358 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743001718.4539757, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:10.002050 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743001712.0461283, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:10.003863 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743001721.3064423, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:10.004999 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1743001708.478003, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-03-26 15:39:10.005549 | orchestrator | 2025-03-26 15:39:10.007136 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-03-26 15:39:10.008148 | orchestrator | Wednesday 26 March 2025 15:39:09 +0000 (0:00:01.071) 0:03:58.268 ******* 2025-03-26 15:39:11.191230 | orchestrator | changed: [testbed-manager] 2025-03-26 15:39:11.191994 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:39:11.193113 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:39:11.193867 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:39:11.194615 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:39:11.195007 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:39:11.195509 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:39:11.196023 | orchestrator | 2025-03-26 15:39:11.196644 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-03-26 15:39:11.197035 | orchestrator | Wednesday 26 March 2025 15:39:11 +0000 (0:00:01.197) 0:03:59.466 ******* 2025-03-26 15:39:12.384847 | orchestrator | changed: [testbed-manager] 2025-03-26 15:39:12.386555 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:39:12.386662 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:39:12.387151 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:39:12.387606 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:39:12.388067 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:39:12.388261 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:39:12.388616 | orchestrator | 2025-03-26 15:39:12.389743 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the 
motd] ******************** 2025-03-26 15:39:12.457087 | orchestrator | Wednesday 26 March 2025 15:39:12 +0000 (0:00:01.195) 0:04:00.662 ******* 2025-03-26 15:39:12.457158 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:39:12.494005 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:39:12.536901 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:39:12.571937 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:39:12.616366 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:39:12.681803 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:39:12.682456 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:39:12.683267 | orchestrator | 2025-03-26 15:39:12.684014 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-03-26 15:39:12.684506 | orchestrator | Wednesday 26 March 2025 15:39:12 +0000 (0:00:00.297) 0:04:00.959 ******* 2025-03-26 15:39:13.502475 | orchestrator | ok: [testbed-manager] 2025-03-26 15:39:13.502608 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:39:13.503563 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:39:13.505228 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:39:13.505438 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:39:13.505542 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:39:13.506635 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:39:13.506979 | orchestrator | 2025-03-26 15:39:13.507005 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-03-26 15:39:13.507369 | orchestrator | Wednesday 26 March 2025 15:39:13 +0000 (0:00:00.816) 0:04:01.776 ******* 2025-03-26 15:39:13.950473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:39:13.951469 | orchestrator | 2025-03-26 15:39:13.952308 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-03-26 15:39:13.952987 | orchestrator | Wednesday 26 March 2025 15:39:13 +0000 (0:00:00.450) 0:04:02.226 ******* 2025-03-26 15:39:22.863390 | orchestrator | ok: [testbed-manager] 2025-03-26 15:39:22.864413 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:39:22.864456 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:39:22.866007 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:39:22.867411 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:39:22.868575 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:39:22.870481 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:39:22.871515 | orchestrator | 2025-03-26 15:39:22.872382 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-03-26 15:39:22.872822 | orchestrator | Wednesday 26 March 2025 15:39:22 +0000 (0:00:08.911) 0:04:11.137 ******* 2025-03-26 15:39:24.150821 | orchestrator | ok: [testbed-manager] 2025-03-26 15:39:24.151257 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:39:24.152930 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:39:24.153641 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:39:24.154148 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:39:24.154864 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:39:24.155245 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:39:24.155747 | orchestrator | 2025-03-26 15:39:24.156494 | orchestrator | TASK 
[osism.services.rng : Manage rng service] ********************************* 2025-03-26 15:39:24.156922 | orchestrator | Wednesday 26 March 2025 15:39:24 +0000 (0:00:01.287) 0:04:12.425 ******* 2025-03-26 15:39:25.211378 | orchestrator | ok: [testbed-manager] 2025-03-26 15:39:25.212533 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:39:25.213289 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:39:25.214323 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:39:25.215332 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:39:25.215948 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:39:25.217265 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:39:25.218623 | orchestrator | 2025-03-26 15:39:25.220137 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-03-26 15:39:25.221211 | orchestrator | Wednesday 26 March 2025 15:39:25 +0000 (0:00:01.059) 0:04:13.485 ******* 2025-03-26 15:39:25.643984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:39:25.644919 | orchestrator | 2025-03-26 15:39:25.645964 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-03-26 15:39:25.646977 | orchestrator | Wednesday 26 March 2025 15:39:25 +0000 (0:00:00.435) 0:04:13.921 ******* 2025-03-26 15:39:35.038676 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:39:35.040724 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:39:35.043323 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:39:35.043358 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:39:35.043412 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:39:35.043474 | orchestrator | changed: [testbed-manager] 2025-03-26 15:39:35.043834 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:39:35.044476 | orchestrator | 2025-03-26 15:39:35.044904 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-03-26 15:39:35.045877 | orchestrator | Wednesday 26 March 2025 15:39:35 +0000 (0:00:09.391) 0:04:23.312 ******* 2025-03-26 15:39:35.690993 | orchestrator | changed: [testbed-manager] 2025-03-26 15:39:35.691636 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:39:35.691681 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:39:35.694226 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:39:35.694704 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:39:35.695822 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:39:35.696940 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:39:35.697700 | orchestrator | 2025-03-26 15:39:35.698560 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-03-26 15:39:35.699405 | orchestrator | Wednesday 26 March 2025 15:39:35 +0000 (0:00:00.652) 0:04:23.964 ******* 2025-03-26 15:39:37.027843 | orchestrator | changed: [testbed-manager] 2025-03-26 15:39:37.030209 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:39:37.030343 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:39:37.032832 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:39:37.035521 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:39:37.035556 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:39:37.037944 | orchestrator | changed: [testbed-node-0] 
2025-03-26 15:39:37.037974 | orchestrator | 2025-03-26 15:39:37.038076 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-03-26 15:39:37.040473 | orchestrator | Wednesday 26 March 2025 15:39:37 +0000 (0:00:01.340) 0:04:25.305 ******* 2025-03-26 15:39:38.329551 | orchestrator | changed: [testbed-manager] 2025-03-26 15:39:38.329692 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:39:38.335801 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:39:38.335861 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:39:38.336800 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:39:38.337551 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:39:38.339623 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:39:38.340021 | orchestrator | 2025-03-26 15:39:38.341675 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-03-26 15:39:38.342501 | orchestrator | Wednesday 26 March 2025 15:39:38 +0000 (0:00:01.300) 0:04:26.606 ******* 2025-03-26 15:39:38.454483 | orchestrator | ok: [testbed-manager] 2025-03-26 15:39:38.497288 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:39:38.558617 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:39:38.597962 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:39:38.693913 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:39:38.695828 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:39:38.701444 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:39:38.702632 | orchestrator | 2025-03-26 15:39:38.843802 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-03-26 15:39:38.843887 | orchestrator | Wednesday 26 March 2025 15:39:38 +0000 (0:00:00.364) 0:04:26.970 ******* 2025-03-26 15:39:38.843917 | orchestrator | ok: [testbed-manager] 2025-03-26 15:39:38.885357 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:39:38.924481 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:39:38.966096 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:39:39.061704 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:39:39.062604 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:39:39.062637 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:39:39.063362 | orchestrator | 2025-03-26 15:39:39.066708 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-03-26 15:39:39.217017 | orchestrator | Wednesday 26 March 2025 15:39:39 +0000 (0:00:00.362) 0:04:27.333 ******* 2025-03-26 15:39:39.217076 | orchestrator | ok: [testbed-manager] 2025-03-26 15:39:39.256419 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:39:39.297139 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:39:39.338068 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:39:39.428492 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:39:39.429813 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:39:39.433591 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:39:39.434227 | orchestrator | 2025-03-26 15:39:39.434932 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-03-26 15:39:39.436401 | orchestrator | Wednesday 26 March 2025 15:39:39 +0000 (0:00:00.371) 0:04:27.705 ******* 2025-03-26 15:39:44.603153 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:39:44.603812 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:39:44.603867 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:39:44.604209 | orchestrator 
| ok: [testbed-node-4] 2025-03-26 15:39:44.604462 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:39:44.604870 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:39:44.605613 | orchestrator | ok: [testbed-manager] 2025-03-26 15:39:44.605715 | orchestrator | 2025-03-26 15:39:44.607823 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-03-26 15:39:44.607894 | orchestrator | Wednesday 26 March 2025 15:39:44 +0000 (0:00:05.166) 0:04:32.871 ******* 2025-03-26 15:39:45.063412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:39:45.064721 | orchestrator | 2025-03-26 15:39:45.065645 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-03-26 15:39:45.066241 | orchestrator | Wednesday 26 March 2025 15:39:45 +0000 (0:00:00.469) 0:04:33.341 ******* 2025-03-26 15:39:45.145039 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-03-26 15:39:45.145985 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-03-26 15:39:45.146178 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-03-26 15:39:45.187665 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:39:45.189088 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-03-26 15:39:45.190608 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-03-26 15:39:45.252667 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:39:45.253190 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-03-26 15:39:45.256815 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-03-26 15:39:45.297542 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-03-26 15:39:45.297585 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:39:45.299141 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-03-26 15:39:45.301901 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-03-26 15:39:45.342088 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:39:45.343704 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-03-26 15:39:45.433626 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:39:45.435554 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-03-26 15:39:45.435611 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:39:45.437454 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-03-26 15:39:45.438214 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-03-26 15:39:45.439992 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:39:45.440612 | orchestrator | 2025-03-26 15:39:45.441719 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-03-26 15:39:45.442143 | orchestrator | Wednesday 26 March 2025 15:39:45 +0000 (0:00:00.369) 0:04:33.710 ******* 2025-03-26 15:39:45.876135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:39:45.877288 | orchestrator | 2025-03-26 15:39:45.880684 
| orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-03-26 15:39:45.943213 | orchestrator | Wednesday 26 March 2025 15:39:45 +0000 (0:00:00.442) 0:04:34.152 ******* 2025-03-26 15:39:45.943326 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-03-26 15:39:46.023050 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:39:46.023872 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-03-26 15:39:46.066499 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:39:46.131065 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-03-26 15:39:46.131137 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-03-26 15:39:46.132395 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:39:46.132889 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-03-26 15:39:46.178274 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:39:46.179019 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-03-26 15:39:46.258836 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:39:46.259901 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:39:46.260567 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-03-26 15:39:46.262187 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:39:46.263577 | orchestrator | 2025-03-26 15:39:46.264619 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-03-26 15:39:46.265573 | orchestrator | Wednesday 26 March 2025 15:39:46 +0000 (0:00:00.383) 0:04:34.536 ******* 2025-03-26 15:39:46.783133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:39:46.783285 | orchestrator | 2025-03-26 15:39:46.783630 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-03-26 15:39:46.785060 | orchestrator | Wednesday 26 March 2025 15:39:46 +0000 (0:00:00.523) 0:04:35.059 ******* 2025-03-26 15:40:21.500410 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:40:21.501168 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:40:21.501204 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:40:21.501219 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:40:21.501235 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:40:21.501249 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:40:21.501264 | orchestrator | changed: [testbed-manager] 2025-03-26 15:40:21.501278 | orchestrator | 2025-03-26 15:40:21.501293 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-03-26 15:40:21.501317 | orchestrator | Wednesday 26 March 2025 15:40:21 +0000 (0:00:34.709) 0:05:09.769 ******* 2025-03-26 15:40:29.980191 | orchestrator | changed: [testbed-manager] 2025-03-26 15:40:29.982861 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:40:29.982910 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:40:29.984390 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:40:29.986111 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:40:29.986417 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:40:29.987184 | orchestrator | changed: 
[testbed-node-4] 2025-03-26 15:40:29.987571 | orchestrator | 2025-03-26 15:40:29.988549 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-03-26 15:40:38.432342 | orchestrator | Wednesday 26 March 2025 15:40:29 +0000 (0:00:08.483) 0:05:18.253 ******* 2025-03-26 15:40:38.432516 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:40:38.432602 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:40:38.432987 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:40:38.433380 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:40:38.434716 | orchestrator | changed: [testbed-manager] 2025-03-26 15:40:38.435943 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:40:38.436449 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:40:38.437264 | orchestrator | 2025-03-26 15:40:38.437962 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-03-26 15:40:38.438300 | orchestrator | Wednesday 26 March 2025 15:40:38 +0000 (0:00:08.453) 0:05:26.707 ******* 2025-03-26 15:40:40.518015 | orchestrator | ok: [testbed-manager] 2025-03-26 15:40:40.518483 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:40:40.518527 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:40:40.520286 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:40:40.521369 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:40:40.521853 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:40:40.522749 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:40:40.523113 | orchestrator | 2025-03-26 15:40:40.523696 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-03-26 15:40:40.524248 | orchestrator | Wednesday 26 March 2025 15:40:40 +0000 (0:00:02.084) 0:05:28.792 ******* 2025-03-26 15:40:47.164714 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:40:47.165000 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:40:47.165467 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:40:47.168920 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:40:47.169836 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:40:47.170390 | orchestrator | changed: [testbed-manager] 2025-03-26 15:40:47.171565 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:40:47.172193 | orchestrator | 2025-03-26 15:40:47.172767 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-03-26 15:40:47.173670 | orchestrator | Wednesday 26 March 2025 15:40:47 +0000 (0:00:06.647) 0:05:35.439 ******* 2025-03-26 15:40:47.664993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:40:47.665785 | orchestrator | 2025-03-26 15:40:47.665839 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-03-26 15:40:47.666837 | orchestrator | Wednesday 26 March 2025 15:40:47 +0000 (0:00:00.501) 0:05:35.941 ******* 2025-03-26 15:40:48.478276 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:40:48.478487 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:40:48.479383 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:40:48.479585 | orchestrator | changed: [testbed-manager] 2025-03-26 15:40:48.480919 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:40:48.482288 | orchestrator | 
changed: [testbed-node-1] 2025-03-26 15:40:48.482320 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:40:48.482839 | orchestrator | 2025-03-26 15:40:48.482863 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-03-26 15:40:48.482883 | orchestrator | Wednesday 26 March 2025 15:40:48 +0000 (0:00:00.812) 0:05:36.753 ******* 2025-03-26 15:40:50.389286 | orchestrator | ok: [testbed-manager] 2025-03-26 15:40:50.390316 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:40:50.393633 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:40:50.394910 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:40:50.396547 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:40:50.397344 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:40:50.398000 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:40:50.398662 | orchestrator | 2025-03-26 15:40:50.399412 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-03-26 15:40:50.400186 | orchestrator | Wednesday 26 March 2025 15:40:50 +0000 (0:00:01.910) 0:05:38.664 ******* 2025-03-26 15:40:51.233265 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:40:51.233580 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:40:51.234360 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:40:51.234786 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:40:51.235379 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:40:51.235699 | orchestrator | changed: [testbed-manager] 2025-03-26 15:40:51.236372 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:40:51.236652 | orchestrator | 2025-03-26 15:40:51.237389 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-03-26 15:40:51.239118 | orchestrator | Wednesday 26 March 2025 15:40:51 +0000 (0:00:00.843) 0:05:39.508 ******* 2025-03-26 15:40:51.316096 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:40:51.350876 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:40:51.388546 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:40:51.422681 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:40:51.456948 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:40:51.554673 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:40:51.555886 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:40:51.556795 | orchestrator | 2025-03-26 15:40:51.557553 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-03-26 15:40:51.558716 | orchestrator | Wednesday 26 March 2025 15:40:51 +0000 (0:00:00.324) 0:05:39.832 ******* 2025-03-26 15:40:51.630916 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:40:51.685537 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:40:51.738112 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:40:51.775813 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:40:51.840170 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:40:52.061921 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:40:52.062788 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:40:52.063958 | orchestrator | 2025-03-26 15:40:52.065114 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-03-26 15:40:52.066278 | orchestrator | Wednesday 26 March 2025 15:40:52 +0000 (0:00:00.504) 0:05:40.337 ******* 2025-03-26 15:40:52.150620 | orchestrator | ok: [testbed-manager] 2025-03-26 
15:40:52.241502 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:40:52.279777 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:40:52.336600 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:40:52.429834 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:40:52.429923 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:40:52.430748 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:40:52.431263 | orchestrator | 2025-03-26 15:40:52.431586 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-03-26 15:40:52.432344 | orchestrator | Wednesday 26 March 2025 15:40:52 +0000 (0:00:00.368) 0:05:40.706 ******* 2025-03-26 15:40:52.535707 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:40:52.575694 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:40:52.609984 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:40:52.683037 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:40:52.755112 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:40:52.755497 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:40:52.757143 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:40:52.757862 | orchestrator | 2025-03-26 15:40:52.758627 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-03-26 15:40:52.758897 | orchestrator | Wednesday 26 March 2025 15:40:52 +0000 (0:00:00.324) 0:05:41.031 ******* 2025-03-26 15:40:52.882145 | orchestrator | ok: [testbed-manager] 2025-03-26 15:40:52.921415 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:40:52.963617 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:40:53.005287 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:40:53.084343 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:40:53.085729 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:40:53.087193 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:40:53.088696 | orchestrator | 2025-03-26 15:40:53.089018 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-03-26 15:40:53.090496 | orchestrator | Wednesday 26 March 2025 15:40:53 +0000 (0:00:00.331) 0:05:41.362 ******* 2025-03-26 15:40:53.157067 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:40:53.261415 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:40:53.302434 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:40:53.343538 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:40:53.422141 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:40:53.424721 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:40:53.429833 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:40:53.429885 | orchestrator | 2025-03-26 15:40:53.430152 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-03-26 15:40:53.430184 | orchestrator | Wednesday 26 March 2025 15:40:53 +0000 (0:00:00.334) 0:05:41.697 ******* 2025-03-26 15:40:53.493926 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:40:53.527337 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:40:53.570290 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:40:53.605185 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:40:53.637262 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:40:53.728916 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:40:53.729875 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:40:53.730486 | orchestrator | 2025-03-26 15:40:53.732015 | 
orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-03-26 15:40:53.732684 | orchestrator | Wednesday 26 March 2025 15:40:53 +0000 (0:00:00.308) 0:05:42.006 ******* 2025-03-26 15:40:54.429519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:40:54.429671 | orchestrator | 2025-03-26 15:40:54.429966 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-03-26 15:40:54.430755 | orchestrator | Wednesday 26 March 2025 15:40:54 +0000 (0:00:00.699) 0:05:42.705 ******* 2025-03-26 15:40:55.304327 | orchestrator | ok: [testbed-manager] 2025-03-26 15:40:55.304511 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:40:55.306795 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:40:55.308275 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:40:55.309343 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:40:55.311320 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:40:55.311431 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:40:55.312484 | orchestrator | 2025-03-26 15:40:55.313514 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-03-26 15:40:55.314262 | orchestrator | Wednesday 26 March 2025 15:40:55 +0000 (0:00:00.874) 0:05:43.579 ******* 2025-03-26 15:40:58.382089 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:40:58.382456 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:40:58.383051 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:40:58.383869 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:40:58.384734 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:40:58.387656 | orchestrator | ok: [testbed-manager] 2025-03-26 15:40:58.388192 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:40:58.388215 | orchestrator | 2025-03-26 15:40:58.388234 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-03-26 15:40:58.389481 | orchestrator | Wednesday 26 March 2025 15:40:58 +0000 (0:00:03.079) 0:05:46.659 ******* 2025-03-26 15:40:58.474577 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-03-26 15:40:58.475598 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-03-26 15:40:58.554087 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-03-26 15:40:58.555309 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-03-26 15:40:58.558959 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-03-26 15:40:58.559555 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-03-26 15:40:58.629062 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:40:58.744023 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-03-26 15:40:58.744121 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:40:58.745075 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-03-26 15:40:58.745814 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-03-26 15:40:58.749665 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-03-26 15:40:58.820009 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-03-26 15:40:58.820077 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2025-03-26 15:40:58.820101 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:40:58.820809 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-03-26 15:40:58.821181 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-03-26 15:40:58.822060 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-03-26 15:40:58.926657 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:40:58.927749 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-03-26 15:40:58.927773 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-03-26 15:40:58.927791 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-03-26 15:40:59.073035 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:40:59.074744 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:40:59.076081 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-03-26 15:40:59.076836 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-03-26 15:40:59.078429 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-03-26 15:40:59.079731 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:40:59.081012 | orchestrator | 2025-03-26 15:40:59.081741 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-03-26 15:40:59.082302 | orchestrator | Wednesday 26 March 2025 15:40:59 +0000 (0:00:00.689) 0:05:47.348 ******* 2025-03-26 15:41:06.152742 | orchestrator | ok: [testbed-manager] 2025-03-26 15:41:06.153212 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:41:06.155953 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:41:06.158491 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:41:06.158526 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:41:06.158738 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:41:06.158768 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:41:06.160140 | orchestrator | 2025-03-26 15:41:06.161309 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-03-26 15:41:06.161695 | orchestrator | Wednesday 26 March 2025 15:41:06 +0000 (0:00:07.078) 0:05:54.427 ******* 2025-03-26 15:41:07.350831 | orchestrator | ok: [testbed-manager] 2025-03-26 15:41:07.351531 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:41:07.352448 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:41:07.352861 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:41:07.353445 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:41:07.354663 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:41:07.356008 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:41:07.358526 | orchestrator | 2025-03-26 15:41:15.515602 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-03-26 15:41:15.515818 | orchestrator | Wednesday 26 March 2025 15:41:07 +0000 (0:00:01.200) 0:05:55.627 ******* 2025-03-26 15:41:15.515860 | orchestrator | ok: [testbed-manager] 2025-03-26 15:41:15.515957 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:41:15.516844 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:41:15.516885 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:41:15.517117 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:41:15.517714 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:41:15.518438 | orchestrator | changed: [testbed-node-4] 2025-03-26 
15:41:15.518637 | orchestrator | 2025-03-26 15:41:15.519479 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-03-26 15:41:15.521120 | orchestrator | Wednesday 26 March 2025 15:41:15 +0000 (0:00:08.162) 0:06:03.790 ******* 2025-03-26 15:41:18.539499 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:41:18.539716 | orchestrator | changed: [testbed-manager] 2025-03-26 15:41:18.540661 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:41:18.541451 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:41:18.541888 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:41:18.542613 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:41:18.543670 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:41:18.544094 | orchestrator | 2025-03-26 15:41:18.544947 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-03-26 15:41:18.545727 | orchestrator | Wednesday 26 March 2025 15:41:18 +0000 (0:00:03.025) 0:06:06.816 ******* 2025-03-26 15:41:20.110143 | orchestrator | ok: [testbed-manager] 2025-03-26 15:41:20.113733 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:41:20.115468 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:41:20.115500 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:41:20.115522 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:41:20.116379 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:41:20.116849 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:41:20.117784 | orchestrator | 2025-03-26 15:41:20.118612 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-03-26 15:41:20.118880 | orchestrator | Wednesday 26 March 2025 15:41:20 +0000 (0:00:01.568) 0:06:08.384 ******* 2025-03-26 15:41:21.602399 | orchestrator | ok: [testbed-manager] 2025-03-26 15:41:21.602614 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:41:21.603264 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:41:21.604106 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:41:21.604151 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:41:21.604337 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:41:21.604752 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:41:21.605106 | orchestrator | 2025-03-26 15:41:21.605516 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-03-26 15:41:21.605637 | orchestrator | Wednesday 26 March 2025 15:41:21 +0000 (0:00:01.492) 0:06:09.877 ******* 2025-03-26 15:41:21.823509 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:41:21.914767 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:41:22.011453 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:41:22.082412 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:41:22.279283 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:41:22.279413 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:41:22.279864 | orchestrator | changed: [testbed-manager] 2025-03-26 15:41:22.280846 | orchestrator | 2025-03-26 15:41:22.281160 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-03-26 15:41:22.281802 | orchestrator | Wednesday 26 March 2025 15:41:22 +0000 (0:00:00.676) 0:06:10.553 ******* 2025-03-26 15:41:32.625388 | orchestrator | ok: [testbed-manager] 2025-03-26 15:41:32.627711 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:41:32.627756 | 
orchestrator | changed: [testbed-node-5] 2025-03-26 15:41:32.627780 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:41:32.628064 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:41:32.628089 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:41:32.628109 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:41:32.631383 | orchestrator | 2025-03-26 15:41:33.184124 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-03-26 15:41:33.184252 | orchestrator | Wednesday 26 March 2025 15:41:32 +0000 (0:00:10.343) 0:06:20.897 ******* 2025-03-26 15:41:33.184290 | orchestrator | changed: [testbed-manager] 2025-03-26 15:41:33.749663 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:41:33.750850 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:41:33.753998 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:41:33.754856 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:41:33.755208 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:41:33.755993 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:41:33.756928 | orchestrator | 2025-03-26 15:41:33.757643 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-03-26 15:41:33.758478 | orchestrator | Wednesday 26 March 2025 15:41:33 +0000 (0:00:01.124) 0:06:22.022 ******* 2025-03-26 15:41:47.395970 | orchestrator | ok: [testbed-manager] 2025-03-26 15:41:47.396141 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:41:47.396164 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:41:47.396184 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:41:47.397186 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:41:47.398219 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:41:47.400067 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:41:47.400684 | orchestrator | 2025-03-26 15:41:47.402143 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-03-26 15:42:00.962767 | orchestrator | Wednesday 26 March 2025 15:41:47 +0000 (0:00:13.643) 0:06:35.666 ******* 2025-03-26 15:42:00.962950 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:00.963423 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:00.964230 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:00.964567 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:00.965832 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:00.968533 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:00.970686 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:00.970720 | orchestrator | 2025-03-26 15:42:00.971034 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-03-26 15:42:00.972934 | orchestrator | Wednesday 26 March 2025 15:42:00 +0000 (0:00:13.569) 0:06:49.235 ******* 2025-03-26 15:42:01.421962 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-03-26 15:42:02.256729 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-03-26 15:42:02.259365 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-03-26 15:42:02.259675 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-03-26 15:42:02.260399 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-03-26 15:42:02.262111 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-03-26 15:42:02.262957 | orchestrator | ok: [testbed-node-1] => 
(item=python3-docker) 2025-03-26 15:42:02.265289 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-03-26 15:42:02.266223 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-03-26 15:42:02.266732 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-03-26 15:42:02.267631 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-03-26 15:42:02.268340 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-03-26 15:42:02.269434 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-03-26 15:42:02.270013 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-03-26 15:42:02.270961 | orchestrator | 2025-03-26 15:42:02.271770 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-03-26 15:42:02.272233 | orchestrator | Wednesday 26 March 2025 15:42:02 +0000 (0:00:01.294) 0:06:50.529 ******* 2025-03-26 15:42:02.410704 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:42:02.478851 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:42:02.545202 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:42:02.616459 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:42:02.681908 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:42:02.799745 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:42:06.767822 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:42:06.767950 | orchestrator | 2025-03-26 15:42:06.767970 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-03-26 15:42:06.767985 | orchestrator | Wednesday 26 March 2025 15:42:02 +0000 (0:00:00.540) 0:06:51.070 ******* 2025-03-26 15:42:06.768015 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:06.771292 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:06.771933 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:06.771964 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:06.773302 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:06.774539 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:06.774985 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:06.775611 | orchestrator | 2025-03-26 15:42:06.776552 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-03-26 15:42:06.777109 | orchestrator | Wednesday 26 March 2025 15:42:06 +0000 (0:00:03.973) 0:06:55.043 ******* 2025-03-26 15:42:06.900257 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:42:06.973782 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:42:07.240070 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:42:07.317305 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:42:07.399537 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:42:07.508945 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:42:07.510783 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:42:07.510822 | orchestrator | 2025-03-26 15:42:07.511103 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-03-26 15:42:07.511670 | orchestrator | Wednesday 26 March 2025 15:42:07 +0000 (0:00:00.741) 0:06:55.785 ******* 2025-03-26 15:42:07.588124 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-03-26 15:42:07.591450 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-03-26 15:42:07.676592 | 
orchestrator | skipping: [testbed-manager] 2025-03-26 15:42:07.680716 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-03-26 15:42:07.795295 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-03-26 15:42:07.795374 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:42:07.795696 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-03-26 15:42:07.796862 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-03-26 15:42:07.873374 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:42:07.874393 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-03-26 15:42:07.877321 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-03-26 15:42:07.957154 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:42:07.957636 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-03-26 15:42:07.959834 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-03-26 15:42:08.042521 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:42:08.043539 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-03-26 15:42:08.044152 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-03-26 15:42:08.178855 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:42:08.182725 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-03-26 15:42:08.183076 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-03-26 15:42:08.183102 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:42:08.183121 | orchestrator | 2025-03-26 15:42:08.184431 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-03-26 15:42:08.184805 | orchestrator | Wednesday 26 March 2025 15:42:08 +0000 (0:00:00.665) 0:06:56.450 ******* 2025-03-26 15:42:08.338144 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:42:08.420109 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:42:08.493339 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:42:08.579336 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:42:08.645873 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:42:08.757551 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:42:08.759384 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:42:08.762601 | orchestrator | 2025-03-26 15:42:08.764029 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-03-26 15:42:08.764060 | orchestrator | Wednesday 26 March 2025 15:42:08 +0000 (0:00:00.582) 0:06:57.033 ******* 2025-03-26 15:42:08.899685 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:42:08.966269 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:42:09.042916 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:42:09.111455 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:42:09.181730 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:42:09.303547 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:42:09.305402 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:42:09.308589 | orchestrator | 2025-03-26 15:42:09.311387 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-03-26 15:42:09.452997 | orchestrator | Wednesday 26 March 2025 15:42:09 +0000 (0:00:00.545) 0:06:57.579 ******* 2025-03-26 15:42:09.453070 | orchestrator | 
skipping: [testbed-manager] 2025-03-26 15:42:09.542451 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:42:09.612882 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:42:09.679181 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:42:09.754149 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:42:09.884775 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:42:09.885610 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:42:09.886811 | orchestrator | 2025-03-26 15:42:09.887451 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-03-26 15:42:09.889240 | orchestrator | Wednesday 26 March 2025 15:42:09 +0000 (0:00:00.578) 0:06:58.157 ******* 2025-03-26 15:42:16.651039 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:16.651829 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:16.655185 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:16.656800 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:16.657579 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:16.658567 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:16.659353 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:16.659861 | orchestrator | 2025-03-26 15:42:16.660681 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-03-26 15:42:16.662324 | orchestrator | Wednesday 26 March 2025 15:42:16 +0000 (0:00:06.767) 0:07:04.925 ******* 2025-03-26 15:42:17.575914 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:42:17.576630 | orchestrator | 2025-03-26 15:42:17.578212 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-03-26 15:42:17.578456 | orchestrator | Wednesday 26 March 2025 15:42:17 +0000 (0:00:00.928) 0:07:05.854 ******* 2025-03-26 15:42:18.040289 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:18.470847 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:18.471943 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:18.474382 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:18.475194 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:18.475453 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:18.475513 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:18.475528 | orchestrator | 2025-03-26 15:42:18.475548 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-03-26 15:42:18.475901 | orchestrator | Wednesday 26 March 2025 15:42:18 +0000 (0:00:00.894) 0:07:06.749 ******* 2025-03-26 15:42:19.548565 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:19.549171 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:19.550565 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:19.551593 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:19.552704 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:19.554085 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:19.555217 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:19.556973 | orchestrator | 2025-03-26 15:42:19.558422 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-03-26 15:42:19.559297 | orchestrator | Wednesday 26 March 
2025 15:42:19 +0000 (0:00:01.075) 0:07:07.824 ******* 2025-03-26 15:42:21.032059 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:21.032241 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:21.033214 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:21.038241 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:21.039221 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:21.039795 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:21.041090 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:21.041633 | orchestrator | 2025-03-26 15:42:21.041673 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-03-26 15:42:21.042538 | orchestrator | Wednesday 26 March 2025 15:42:21 +0000 (0:00:01.480) 0:07:09.305 ******* 2025-03-26 15:42:21.194720 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:42:22.435828 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:42:22.436251 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:42:22.437257 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:42:22.438889 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:42:22.440091 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:42:22.440122 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:42:22.440178 | orchestrator | 2025-03-26 15:42:22.443485 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-03-26 15:42:23.813239 | orchestrator | Wednesday 26 March 2025 15:42:22 +0000 (0:00:01.407) 0:07:10.713 ******* 2025-03-26 15:42:23.813374 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:23.815772 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:23.815815 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:23.817096 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:23.817126 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:23.817844 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:23.819385 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:23.820924 | orchestrator | 2025-03-26 15:42:23.821614 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-03-26 15:42:23.822538 | orchestrator | Wednesday 26 March 2025 15:42:23 +0000 (0:00:01.373) 0:07:12.086 ******* 2025-03-26 15:42:25.217028 | orchestrator | changed: [testbed-manager] 2025-03-26 15:42:25.217192 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:25.217877 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:25.219365 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:25.222388 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:25.223415 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:25.223443 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:25.223500 | orchestrator | 2025-03-26 15:42:25.224812 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-03-26 15:42:25.226075 | orchestrator | Wednesday 26 March 2025 15:42:25 +0000 (0:00:01.402) 0:07:13.488 ******* 2025-03-26 15:42:26.410422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:42:26.410714 | orchestrator | 2025-03-26 15:42:26.411456 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] 
*************************** 2025-03-26 15:42:26.412072 | orchestrator | Wednesday 26 March 2025 15:42:26 +0000 (0:00:01.196) 0:07:14.684 ******* 2025-03-26 15:42:27.906651 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:42:27.909166 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:42:27.909614 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:27.910146 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:42:27.911105 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:42:27.912304 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:42:27.912662 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:42:27.913861 | orchestrator | 2025-03-26 15:42:27.914825 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-03-26 15:42:27.915289 | orchestrator | Wednesday 26 March 2025 15:42:27 +0000 (0:00:01.494) 0:07:16.179 ******* 2025-03-26 15:42:29.094793 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:29.095534 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:42:29.096045 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:42:29.096602 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:42:29.097311 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:42:29.097869 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:42:29.097977 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:42:29.098537 | orchestrator | 2025-03-26 15:42:29.099122 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-03-26 15:42:29.099454 | orchestrator | Wednesday 26 March 2025 15:42:29 +0000 (0:00:01.191) 0:07:17.371 ******* 2025-03-26 15:42:30.353340 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:30.355266 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:42:30.356522 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:42:30.357902 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:42:30.358489 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:42:30.359482 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:42:30.361587 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:42:30.361777 | orchestrator | 2025-03-26 15:42:30.362304 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-03-26 15:42:30.362691 | orchestrator | Wednesday 26 March 2025 15:42:30 +0000 (0:00:01.253) 0:07:18.625 ******* 2025-03-26 15:42:31.815319 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:31.816322 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:42:31.817568 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:42:31.819421 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:42:31.820171 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:42:31.821509 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:42:31.822882 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:42:31.823344 | orchestrator | 2025-03-26 15:42:31.824261 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-03-26 15:42:31.825390 | orchestrator | Wednesday 26 March 2025 15:42:31 +0000 (0:00:01.465) 0:07:20.090 ******* 2025-03-26 15:42:33.074912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:42:33.076291 | orchestrator | 2025-03-26 15:42:33.076347 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 
2025-03-26 15:42:33.077398 | orchestrator | Wednesday 26 March 2025 15:42:32 +0000 (0:00:00.958) 0:07:21.049 ******* 2025-03-26 15:42:33.078527 | orchestrator | 2025-03-26 15:42:33.079579 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-26 15:42:33.080291 | orchestrator | Wednesday 26 March 2025 15:42:32 +0000 (0:00:00.044) 0:07:21.093 ******* 2025-03-26 15:42:33.080779 | orchestrator | 2025-03-26 15:42:33.082579 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-26 15:42:33.083455 | orchestrator | Wednesday 26 March 2025 15:42:32 +0000 (0:00:00.038) 0:07:21.132 ******* 2025-03-26 15:42:33.084323 | orchestrator | 2025-03-26 15:42:33.085407 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-26 15:42:33.086583 | orchestrator | Wednesday 26 March 2025 15:42:32 +0000 (0:00:00.045) 0:07:21.178 ******* 2025-03-26 15:42:33.087192 | orchestrator | 2025-03-26 15:42:33.088569 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-26 15:42:33.088900 | orchestrator | Wednesday 26 March 2025 15:42:32 +0000 (0:00:00.040) 0:07:21.218 ******* 2025-03-26 15:42:33.089695 | orchestrator | 2025-03-26 15:42:33.090314 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-26 15:42:33.090771 | orchestrator | Wednesday 26 March 2025 15:42:32 +0000 (0:00:00.039) 0:07:21.258 ******* 2025-03-26 15:42:33.091452 | orchestrator | 2025-03-26 15:42:33.091870 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-03-26 15:42:33.092378 | orchestrator | Wednesday 26 March 2025 15:42:33 +0000 (0:00:00.047) 0:07:21.306 ******* 2025-03-26 15:42:33.092566 | orchestrator | 2025-03-26 15:42:33.092974 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-03-26 15:42:33.093309 | orchestrator | Wednesday 26 March 2025 15:42:33 +0000 (0:00:00.041) 0:07:21.347 ******* 2025-03-26 15:42:34.293187 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:42:34.293397 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:42:34.294264 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:42:34.295360 | orchestrator | 2025-03-26 15:42:34.296125 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-03-26 15:42:34.297168 | orchestrator | Wednesday 26 March 2025 15:42:34 +0000 (0:00:01.218) 0:07:22.565 ******* 2025-03-26 15:42:35.896725 | orchestrator | changed: [testbed-manager] 2025-03-26 15:42:35.897820 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:35.899713 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:35.900542 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:35.900573 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:35.901208 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:35.901391 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:35.901887 | orchestrator | 2025-03-26 15:42:35.902520 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-03-26 15:42:35.903129 | orchestrator | Wednesday 26 March 2025 15:42:35 +0000 (0:00:01.605) 0:07:24.171 ******* 2025-03-26 15:42:37.185911 | orchestrator | changed: [testbed-manager] 2025-03-26 15:42:37.187329 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:37.187621 | 
orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:37.188172 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:37.188350 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:37.188718 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:37.191653 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:37.332673 | orchestrator | 2025-03-26 15:42:37.332753 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-03-26 15:42:37.332770 | orchestrator | Wednesday 26 March 2025 15:42:37 +0000 (0:00:01.288) 0:07:25.459 ******* 2025-03-26 15:42:37.332796 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:42:39.296507 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:39.296722 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:39.297988 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:39.298623 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:39.300227 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:39.300484 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:39.301115 | orchestrator | 2025-03-26 15:42:39.301445 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-03-26 15:42:39.302231 | orchestrator | Wednesday 26 March 2025 15:42:39 +0000 (0:00:02.109) 0:07:27.569 ******* 2025-03-26 15:42:39.425750 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:42:40.540896 | orchestrator | 2025-03-26 15:42:40.540994 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-03-26 15:42:40.541012 | orchestrator | Wednesday 26 March 2025 15:42:39 +0000 (0:00:00.129) 0:07:27.698 ******* 2025-03-26 15:42:40.541043 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:40.541294 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:40.541796 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:40.545400 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:40.679695 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:40.679757 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:40.679773 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:40.679787 | orchestrator | 2025-03-26 15:42:40.679803 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-03-26 15:42:40.679818 | orchestrator | Wednesday 26 March 2025 15:42:40 +0000 (0:00:01.116) 0:07:28.815 ******* 2025-03-26 15:42:40.679843 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:42:40.748322 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:42:41.084361 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:42:41.163388 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:42:41.298113 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:42:41.298383 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:42:41.299213 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:42:41.300252 | orchestrator | 2025-03-26 15:42:41.300576 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-03-26 15:42:41.303971 | orchestrator | Wednesday 26 March 2025 15:42:41 +0000 (0:00:00.758) 0:07:29.574 ******* 2025-03-26 15:42:42.266220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 2025-03-26 15:42:42.266855 | orchestrator | 2025-03-26 15:42:42.267715 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-03-26 15:42:42.271169 | orchestrator | Wednesday 26 March 2025 15:42:42 +0000 (0:00:00.968) 0:07:30.542 ******* 2025-03-26 15:42:42.805645 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:43.287443 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:42:43.288051 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:42:43.289582 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:42:43.290349 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:42:43.293743 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:42:43.294798 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:42:43.296100 | orchestrator | 2025-03-26 15:42:43.297169 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-03-26 15:42:43.298300 | orchestrator | Wednesday 26 March 2025 15:42:43 +0000 (0:00:01.019) 0:07:31.561 ******* 2025-03-26 15:42:46.104764 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-03-26 15:42:46.105212 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-03-26 15:42:46.105891 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-03-26 15:42:46.106910 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-03-26 15:42:46.108234 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-03-26 15:42:46.108677 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-03-26 15:42:46.109542 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-03-26 15:42:46.109760 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-03-26 15:42:46.110562 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-03-26 15:42:46.111408 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-03-26 15:42:46.111782 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-03-26 15:42:46.112587 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-03-26 15:42:46.113130 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-03-26 15:42:46.114964 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-03-26 15:42:46.115873 | orchestrator | 2025-03-26 15:42:46.116329 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-03-26 15:42:46.116531 | orchestrator | Wednesday 26 March 2025 15:42:46 +0000 (0:00:02.818) 0:07:34.380 ******* 2025-03-26 15:42:46.251923 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:42:46.328510 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:42:46.413848 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:42:46.496744 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:42:46.563760 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:42:46.673223 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:42:46.674122 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:42:46.675252 | orchestrator | 2025-03-26 15:42:46.678606 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-03-26 15:42:47.514862 | orchestrator | Wednesday 26 March 2025 15:42:46 +0000 (0:00:00.567) 0:07:34.947 ******* 2025-03-26 15:42:47.514988 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:42:47.517604 | orchestrator | 2025-03-26 15:42:47.517859 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-03-26 15:42:47.518885 | orchestrator | Wednesday 26 March 2025 15:42:47 +0000 (0:00:00.841) 0:07:35.789 ******* 2025-03-26 15:42:47.944614 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:48.382826 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:42:48.383299 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:42:48.383336 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:42:48.384408 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:42:48.384919 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:42:48.386065 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:42:48.395337 | orchestrator | 2025-03-26 15:42:48.790174 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-03-26 15:42:48.790254 | orchestrator | Wednesday 26 March 2025 15:42:48 +0000 (0:00:00.868) 0:07:36.657 ******* 2025-03-26 15:42:48.790282 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:49.509995 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:42:49.510627 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:42:49.511398 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:42:49.512784 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:42:49.514481 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:42:49.515580 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:42:49.515920 | orchestrator | 2025-03-26 15:42:49.516549 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-03-26 15:42:49.517479 | orchestrator | Wednesday 26 March 2025 15:42:49 +0000 (0:00:01.128) 0:07:37.785 ******* 2025-03-26 15:42:49.643663 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:42:49.729249 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:42:49.799101 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:42:49.876896 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:42:49.949959 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:42:50.051641 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:42:50.055833 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:42:50.056120 | orchestrator | 2025-03-26 15:42:50.056151 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-03-26 15:42:50.056172 | orchestrator | Wednesday 26 March 2025 15:42:50 +0000 (0:00:00.540) 0:07:38.326 ******* 2025-03-26 15:42:51.606977 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:51.607214 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:42:51.607246 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:42:51.608181 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:42:51.608394 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:42:51.609279 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:42:51.609890 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:42:51.610914 | orchestrator | 2025-03-26 15:42:51.611238 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-03-26 15:42:51.612707 | orchestrator | Wednesday 26 March 2025 15:42:51 +0000 (0:00:01.552) 0:07:39.879 ******* 2025-03-26 
15:42:51.761727 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:42:51.835917 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:42:51.912704 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:42:51.981381 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:42:52.051500 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:42:52.161639 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:42:52.162620 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:42:52.162948 | orchestrator | 2025-03-26 15:42:52.164384 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-03-26 15:42:52.165036 | orchestrator | Wednesday 26 March 2025 15:42:52 +0000 (0:00:00.560) 0:07:40.439 ******* 2025-03-26 15:42:54.301717 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:54.302538 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:42:54.302612 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:42:54.302629 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:42:54.302652 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:42:54.302710 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:42:54.302730 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:42:54.302991 | orchestrator | 2025-03-26 15:42:54.304439 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-03-26 15:42:54.304737 | orchestrator | Wednesday 26 March 2025 15:42:54 +0000 (0:00:02.135) 0:07:42.575 ******* 2025-03-26 15:42:55.870275 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:55.870596 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:55.870640 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:55.871588 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:55.871872 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:55.872698 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:55.872908 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:55.873366 | orchestrator | 2025-03-26 15:42:55.873653 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-03-26 15:42:55.874106 | orchestrator | Wednesday 26 March 2025 15:42:55 +0000 (0:00:01.566) 0:07:44.142 ******* 2025-03-26 15:42:57.714287 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:57.715733 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:57.716575 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:57.716613 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:57.720376 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:57.721111 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:57.721142 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:42:57.721580 | orchestrator | 2025-03-26 15:42:57.722523 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-03-26 15:42:59.503131 | orchestrator | Wednesday 26 March 2025 15:42:57 +0000 (0:00:01.849) 0:07:45.991 ******* 2025-03-26 15:42:59.503276 | orchestrator | ok: [testbed-manager] 2025-03-26 15:42:59.503492 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:42:59.503739 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:42:59.504136 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:42:59.504672 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:42:59.505355 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:42:59.505918 | orchestrator | changed: [testbed-node-2] 
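The docker_compose tasks above install the docker-compose-plugin package and ship an osism.target plus a docker-compose systemd unit file to every host. A minimal spot-check of that state on a testbed node could look like the following shell sketch; it is not part of the captured output, and the /etc/ansible/facts.d path is only the Ansible default location for custom facts, assumed here rather than confirmed by this log.

  # Compose v2 is a docker CLI plugin, not a standalone docker-compose binary
  docker compose version
  dpkg -s docker-compose-plugin

  # The role copied and enabled osism.target and a docker-compose unit file
  systemctl is-enabled osism.target
  systemctl list-dependencies osism.target

  # Custom facts written by the docker role (path assumed: Ansible default)
  ls /etc/ansible/facts.d/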
2025-03-26 15:42:59.509376 | orchestrator | 2025-03-26 15:42:59.509479 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-26 15:42:59.509505 | orchestrator | Wednesday 26 March 2025 15:42:59 +0000 (0:00:01.787) 0:07:47.778 ******* 2025-03-26 15:43:00.111159 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:00.184439 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:00.652004 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:00.653728 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:00.655148 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:00.655969 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:00.657270 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:00.661114 | orchestrator | 2025-03-26 15:43:00.662096 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-26 15:43:00.662982 | orchestrator | Wednesday 26 March 2025 15:43:00 +0000 (0:00:01.146) 0:07:48.924 ******* 2025-03-26 15:43:00.794191 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:43:00.868089 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:43:00.945084 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:43:01.012985 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:43:01.090666 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:43:01.518273 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:43:01.519504 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:43:01.519923 | orchestrator | 2025-03-26 15:43:01.520397 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-03-26 15:43:01.521187 | orchestrator | Wednesday 26 March 2025 15:43:01 +0000 (0:00:00.869) 0:07:49.794 ******* 2025-03-26 15:43:01.672065 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:43:01.739659 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:43:01.822376 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:43:01.907842 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:43:01.978507 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:43:02.096304 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:43:02.097005 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:43:02.099526 | orchestrator | 2025-03-26 15:43:02.099612 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-03-26 15:43:02.101359 | orchestrator | Wednesday 26 March 2025 15:43:02 +0000 (0:00:00.578) 0:07:50.372 ******* 2025-03-26 15:43:02.228947 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:02.338531 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:02.412477 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:02.479924 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:02.576557 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:02.686124 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:02.687535 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:02.688575 | orchestrator | 2025-03-26 15:43:02.689597 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-03-26 15:43:02.690307 | orchestrator | Wednesday 26 March 2025 15:43:02 +0000 (0:00:00.588) 0:07:50.960 ******* 2025-03-26 15:43:02.835070 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:03.116225 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:03.186620 | orchestrator | ok: [testbed-node-4] 2025-03-26 
15:43:03.255159 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:03.350509 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:03.476473 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:03.477896 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:03.480788 | orchestrator | 2025-03-26 15:43:03.481512 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-03-26 15:43:03.483393 | orchestrator | Wednesday 26 March 2025 15:43:03 +0000 (0:00:00.790) 0:07:51.751 ******* 2025-03-26 15:43:03.621249 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:03.688643 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:03.762112 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:03.834542 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:03.933602 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:04.076172 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:04.078218 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:04.078885 | orchestrator | 2025-03-26 15:43:04.081506 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-03-26 15:43:09.290932 | orchestrator | Wednesday 26 March 2025 15:43:04 +0000 (0:00:00.599) 0:07:52.351 ******* 2025-03-26 15:43:09.291093 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:09.291621 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:09.296655 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:09.297559 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:09.297592 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:09.297613 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:09.298393 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:09.299481 | orchestrator | 2025-03-26 15:43:09.300028 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-03-26 15:43:09.300840 | orchestrator | Wednesday 26 March 2025 15:43:09 +0000 (0:00:05.216) 0:07:57.568 ******* 2025-03-26 15:43:09.448263 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:43:09.533269 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:43:09.695638 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:43:09.773364 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:43:09.904494 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:43:09.906568 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:43:09.907984 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:43:09.909239 | orchestrator | 2025-03-26 15:43:09.910124 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-03-26 15:43:09.911059 | orchestrator | Wednesday 26 March 2025 15:43:09 +0000 (0:00:00.609) 0:07:58.177 ******* 2025-03-26 15:43:11.025702 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:43:11.029696 | orchestrator | 2025-03-26 15:43:12.929781 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-03-26 15:43:12.929895 | orchestrator | Wednesday 26 March 2025 15:43:11 +0000 (0:00:01.122) 0:07:59.300 ******* 2025-03-26 15:43:12.929929 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:12.930120 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:12.931906 | orchestrator | ok: 
[testbed-node-4] 2025-03-26 15:43:12.932244 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:12.933498 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:12.933582 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:12.934195 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:12.937372 | orchestrator | 2025-03-26 15:43:12.938417 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-03-26 15:43:12.939958 | orchestrator | Wednesday 26 March 2025 15:43:12 +0000 (0:00:01.903) 0:08:01.203 ******* 2025-03-26 15:43:14.242622 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:14.244226 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:14.245060 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:14.245410 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:14.246113 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:14.246858 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:14.247494 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:14.248110 | orchestrator | 2025-03-26 15:43:14.248840 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-03-26 15:43:14.249623 | orchestrator | Wednesday 26 March 2025 15:43:14 +0000 (0:00:01.314) 0:08:02.517 ******* 2025-03-26 15:43:15.157977 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:15.158707 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:15.159423 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:15.160357 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:15.161551 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:15.162810 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:15.163689 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:15.164463 | orchestrator | 2025-03-26 15:43:15.164974 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-03-26 15:43:15.165452 | orchestrator | Wednesday 26 March 2025 15:43:15 +0000 (0:00:00.915) 0:08:03.433 ******* 2025-03-26 15:43:17.251011 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-26 15:43:17.251350 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-26 15:43:17.252557 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-26 15:43:17.254473 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-26 15:43:17.257369 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-26 15:43:17.258218 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-26 15:43:17.259177 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-03-26 15:43:17.260692 | orchestrator | 2025-03-26 15:43:17.261517 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-03-26 15:43:17.262369 | orchestrator | 
Wednesday 26 March 2025 15:43:17 +0000 (0:00:02.092) 0:08:05.525 ******* 2025-03-26 15:43:18.102315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:43:18.103300 | orchestrator | 2025-03-26 15:43:18.104628 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-03-26 15:43:18.104731 | orchestrator | Wednesday 26 March 2025 15:43:18 +0000 (0:00:00.852) 0:08:06.378 ******* 2025-03-26 15:43:27.467721 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:43:27.467811 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:43:27.468705 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:43:27.469696 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:43:27.470552 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:43:27.472928 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:43:27.474237 | orchestrator | changed: [testbed-manager] 2025-03-26 15:43:27.475196 | orchestrator | 2025-03-26 15:43:27.475904 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-03-26 15:43:27.477046 | orchestrator | Wednesday 26 March 2025 15:43:27 +0000 (0:00:09.366) 0:08:15.744 ******* 2025-03-26 15:43:29.565066 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:29.567021 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:29.567320 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:29.569692 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:29.570125 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:29.571101 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:29.572532 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:29.573566 | orchestrator | 2025-03-26 15:43:29.574803 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-03-26 15:43:29.575672 | orchestrator | Wednesday 26 March 2025 15:43:29 +0000 (0:00:02.095) 0:08:17.839 ******* 2025-03-26 15:43:30.930510 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:30.930655 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:30.931619 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:30.932099 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:30.932149 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:30.932290 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:30.932771 | orchestrator | 2025-03-26 15:43:30.933521 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-03-26 15:43:30.934268 | orchestrator | Wednesday 26 March 2025 15:43:30 +0000 (0:00:01.367) 0:08:19.207 ******* 2025-03-26 15:43:32.498378 | orchestrator | changed: [testbed-manager] 2025-03-26 15:43:32.498611 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:43:32.499190 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:43:32.501582 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:43:32.502217 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:43:32.502248 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:43:32.502264 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:43:32.502285 | orchestrator | 2025-03-26 15:43:32.502819 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-03-26 15:43:32.503241 | orchestrator | 2025-03-26 
15:43:32.503635 | orchestrator | TASK [Include hardening role] ************************************************** 2025-03-26 15:43:32.504166 | orchestrator | Wednesday 26 March 2025 15:43:32 +0000 (0:00:01.566) 0:08:20.774 ******* 2025-03-26 15:43:32.642138 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:43:32.703387 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:43:32.769914 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:43:32.843256 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:43:32.915003 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:43:33.035505 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:43:33.036035 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:43:33.036067 | orchestrator | 2025-03-26 15:43:33.036090 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-03-26 15:43:33.037261 | orchestrator | 2025-03-26 15:43:33.038309 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-03-26 15:43:33.038545 | orchestrator | Wednesday 26 March 2025 15:43:33 +0000 (0:00:00.532) 0:08:21.306 ******* 2025-03-26 15:43:34.478238 | orchestrator | changed: [testbed-manager] 2025-03-26 15:43:34.478837 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:43:34.479631 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:43:34.480216 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:43:34.480871 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:43:34.483313 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:43:36.049405 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:43:36.049572 | orchestrator | 2025-03-26 15:43:36.049592 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-03-26 15:43:36.049607 | orchestrator | Wednesday 26 March 2025 15:43:34 +0000 (0:00:01.449) 0:08:22.755 ******* 2025-03-26 15:43:36.049638 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:36.049829 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:36.052442 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:36.053016 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:36.054468 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:36.055587 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:36.056829 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:36.058625 | orchestrator | 2025-03-26 15:43:36.058920 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-03-26 15:43:36.060002 | orchestrator | Wednesday 26 March 2025 15:43:36 +0000 (0:00:01.568) 0:08:24.323 ******* 2025-03-26 15:43:36.182657 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:43:36.509453 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:43:36.583797 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:43:36.646598 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:43:36.721136 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:43:37.258364 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:43:37.258558 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:43:37.258990 | orchestrator | 2025-03-26 15:43:37.259370 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-03-26 15:43:37.260128 | orchestrator | Wednesday 26 March 2025 15:43:37 +0000 (0:00:01.209) 0:08:25.533 ******* 2025-03-26 15:43:38.681905 | orchestrator | changed: 
[testbed-manager] 2025-03-26 15:43:38.683228 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:43:38.686247 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:43:38.687243 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:43:38.690826 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:43:38.695912 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:43:38.697274 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:43:38.698549 | orchestrator | 2025-03-26 15:43:38.699551 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-03-26 15:43:38.700256 | orchestrator | 2025-03-26 15:43:38.701024 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-03-26 15:43:38.701792 | orchestrator | Wednesday 26 March 2025 15:43:38 +0000 (0:00:01.423) 0:08:26.956 ******* 2025-03-26 15:43:39.618503 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:43:39.618658 | orchestrator | 2025-03-26 15:43:39.619996 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-03-26 15:43:39.620830 | orchestrator | Wednesday 26 March 2025 15:43:39 +0000 (0:00:00.939) 0:08:27.896 ******* 2025-03-26 15:43:40.055310 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:40.733160 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:40.733893 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:40.735013 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:40.736398 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:40.736894 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:40.737968 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:40.739024 | orchestrator | 2025-03-26 15:43:40.739824 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-03-26 15:43:40.740270 | orchestrator | Wednesday 26 March 2025 15:43:40 +0000 (0:00:01.116) 0:08:29.012 ******* 2025-03-26 15:43:41.906891 | orchestrator | changed: [testbed-manager] 2025-03-26 15:43:41.908650 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:43:41.911532 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:43:41.911909 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:43:41.911939 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:43:41.911958 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:43:41.913158 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:43:41.914170 | orchestrator | 2025-03-26 15:43:41.915206 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-03-26 15:43:41.916579 | orchestrator | Wednesday 26 March 2025 15:43:41 +0000 (0:00:01.169) 0:08:30.182 ******* 2025-03-26 15:43:43.028720 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 15:43:43.028890 | orchestrator | 2025-03-26 15:43:43.028919 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-03-26 15:43:43.032560 | orchestrator | Wednesday 26 March 2025 15:43:43 +0000 (0:00:01.121) 0:08:31.304 ******* 2025-03-26 15:43:43.924268 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:43.925495 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:43.925710 | orchestrator | ok: 
[testbed-node-4] 2025-03-26 15:43:43.926185 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:43.928354 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:43.928605 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:43.929231 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:43.930239 | orchestrator | 2025-03-26 15:43:43.931124 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-03-26 15:43:43.932193 | orchestrator | Wednesday 26 March 2025 15:43:43 +0000 (0:00:00.894) 0:08:32.198 ******* 2025-03-26 15:43:45.094759 | orchestrator | changed: [testbed-manager] 2025-03-26 15:43:45.095317 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:43:45.097219 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:43:45.097591 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:43:45.098590 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:43:45.099478 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:43:45.100046 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:43:45.100984 | orchestrator | 2025-03-26 15:43:45.102083 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:43:45.102142 | orchestrator | 2025-03-26 15:43:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:43:45.102963 | orchestrator | 2025-03-26 15:43:45 | INFO  | Please wait and do not abort execution. 2025-03-26 15:43:45.102998 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-03-26 15:43:45.103384 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-26 15:43:45.104236 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-26 15:43:45.104877 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-26 15:43:45.105671 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-03-26 15:43:45.106694 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-26 15:43:45.107284 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-03-26 15:43:45.107800 | orchestrator | 2025-03-26 15:43:45.108246 | orchestrator | Wednesday 26 March 2025 15:43:45 +0000 (0:00:01.174) 0:08:33.373 ******* 2025-03-26 15:43:45.109050 | orchestrator | =============================================================================== 2025-03-26 15:43:45.109433 | orchestrator | osism.commons.packages : Install required packages --------------------- 82.51s 2025-03-26 15:43:45.110199 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.91s 2025-03-26 15:43:45.110930 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.71s 2025-03-26 15:43:45.111544 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.22s 2025-03-26 15:43:45.111893 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 13.64s 2025-03-26 15:43:45.112357 | orchestrator | osism.services.docker : Install docker package ------------------------- 13.57s 2025-03-26 15:43:45.112877 | orchestrator | osism.commons.packages : Remove 
dependencies that are no longer required -- 13.51s 2025-03-26 15:43:45.113282 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.05s 2025-03-26 15:43:45.113900 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.34s 2025-03-26 15:43:45.114467 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.39s 2025-03-26 15:43:45.114561 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.37s 2025-03-26 15:43:45.115280 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 9.19s 2025-03-26 15:43:45.115577 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.91s 2025-03-26 15:43:45.116247 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.48s 2025-03-26 15:43:45.116806 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.45s 2025-03-26 15:43:45.117311 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.16s 2025-03-26 15:43:45.118119 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.08s 2025-03-26 15:43:45.118456 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.77s 2025-03-26 15:43:45.118832 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.65s 2025-03-26 15:43:45.119572 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.72s 2025-03-26 15:43:45.931135 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-03-26 15:43:48.270190 | orchestrator | + osism apply network 2025-03-26 15:43:48.270288 | orchestrator | 2025-03-26 15:43:48 | INFO  | Task b08aaf31-f421-40c5-b06d-665e1beb7425 (network) was prepared for execution. 2025-03-26 15:43:51.956681 | orchestrator | 2025-03-26 15:43:48 | INFO  | It takes a moment until task b08aaf31-f421-40c5-b06d-665e1beb7425 (network) has been started and output is visible here. 
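With the bootstrap play recapped above, the run continues with "osism apply network". Before the network play output starts, the services the bootstrap just configured (chrony, lldpd, journald) can be spot-checked by hand on any host; this is a hedged sketch in plain shell, not part of the captured output, and only the role and service names are taken from the log.

  # chrony: chrony.conf was templated onto every host and the service restarted
  chronyc tracking
  chronyc sources -v

  # lldpd: package freshly installed, service managed by the role
  systemctl is-active lldpd
  lldpcli show neighbors

  # journald: configuration copied, service restarted via handler
  systemctl is-active systemd-journald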
2025-03-26 15:43:51.956757 | orchestrator | 2025-03-26 15:43:51.957647 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-03-26 15:43:51.960335 | orchestrator | 2025-03-26 15:43:52.112798 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-03-26 15:43:52.112887 | orchestrator | Wednesday 26 March 2025 15:43:51 +0000 (0:00:00.218) 0:00:00.218 ******* 2025-03-26 15:43:52.112913 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:52.201160 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:52.307008 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:52.390518 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:52.475169 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:52.728882 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:52.729700 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:52.730749 | orchestrator | 2025-03-26 15:43:52.733095 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-03-26 15:43:54.075612 | orchestrator | Wednesday 26 March 2025 15:43:52 +0000 (0:00:00.771) 0:00:00.990 ******* 2025-03-26 15:43:54.075754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-26 15:43:54.075890 | orchestrator | 2025-03-26 15:43:54.076476 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-03-26 15:43:54.080238 | orchestrator | Wednesday 26 March 2025 15:43:54 +0000 (0:00:01.345) 0:00:02.336 ******* 2025-03-26 15:43:56.336839 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:56.337149 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:56.337354 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:56.337894 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:56.338450 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:56.339042 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:56.342279 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:58.223306 | orchestrator | 2025-03-26 15:43:58.223395 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-03-26 15:43:58.223451 | orchestrator | Wednesday 26 March 2025 15:43:56 +0000 (0:00:02.260) 0:00:04.596 ******* 2025-03-26 15:43:58.223479 | orchestrator | ok: [testbed-manager] 2025-03-26 15:43:58.223705 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:43:58.227792 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:43:58.228718 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:43:58.229666 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:43:58.232503 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:43:58.232779 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:43:58.232985 | orchestrator | 2025-03-26 15:43:58.233247 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-03-26 15:43:58.233571 | orchestrator | Wednesday 26 March 2025 15:43:58 +0000 (0:00:01.886) 0:00:06.482 ******* 2025-03-26 15:43:58.757003 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-03-26 15:43:58.757181 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-03-26 15:43:59.408637 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-03-26 15:43:59.409382 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-03-26 15:43:59.409449 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-03-26 15:43:59.411601 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-03-26 15:43:59.412828 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-03-26 15:43:59.413775 | orchestrator | 2025-03-26 15:43:59.415272 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-03-26 15:43:59.415717 | orchestrator | Wednesday 26 March 2025 15:43:59 +0000 (0:00:01.188) 0:00:07.671 ******* 2025-03-26 15:44:01.194488 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-26 15:44:01.194920 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-26 15:44:01.196639 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-03-26 15:44:01.197607 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-26 15:44:01.205498 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-03-26 15:44:01.206707 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-26 15:44:01.208274 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-26 15:44:01.208502 | orchestrator | 2025-03-26 15:44:01.208909 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-03-26 15:44:01.209566 | orchestrator | Wednesday 26 March 2025 15:44:01 +0000 (0:00:01.782) 0:00:09.453 ******* 2025-03-26 15:44:03.018621 | orchestrator | changed: [testbed-manager] 2025-03-26 15:44:03.019012 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:44:03.022279 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:44:03.024868 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:44:03.026431 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:44:03.027316 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:44:03.028306 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:44:03.029153 | orchestrator | 2025-03-26 15:44:03.030065 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-03-26 15:44:03.031213 | orchestrator | Wednesday 26 March 2025 15:44:03 +0000 (0:00:01.824) 0:00:11.278 ******* 2025-03-26 15:44:03.641006 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-26 15:44:04.127849 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-03-26 15:44:04.128973 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-03-26 15:44:04.133291 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-03-26 15:44:04.135027 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-03-26 15:44:04.137023 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-03-26 15:44:04.137892 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-03-26 15:44:04.138694 | orchestrator | 2025-03-26 15:44:04.139577 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-03-26 15:44:04.140508 | orchestrator | Wednesday 26 March 2025 15:44:04 +0000 (0:00:01.112) 0:00:12.391 ******* 2025-03-26 15:44:04.597014 | orchestrator | ok: [testbed-manager] 2025-03-26 15:44:04.792707 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:44:05.342943 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:44:05.343107 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:44:05.347500 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:44:05.348169 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:44:05.348627 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:44:05.350501 | orchestrator | 2025-03-26 
15:44:05.351276 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-03-26 15:44:05.352014 | orchestrator | Wednesday 26 March 2025 15:44:05 +0000 (0:00:01.210) 0:00:13.601 ******* 2025-03-26 15:44:05.519727 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:44:05.607502 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:44:05.695113 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:44:05.799728 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:44:05.903682 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:44:06.267767 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:44:06.271194 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:44:06.271497 | orchestrator | 2025-03-26 15:44:06.272547 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-03-26 15:44:06.273389 | orchestrator | Wednesday 26 March 2025 15:44:06 +0000 (0:00:00.925) 0:00:14.526 ******* 2025-03-26 15:44:08.431396 | orchestrator | ok: [testbed-manager] 2025-03-26 15:44:08.431606 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:44:08.432308 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:44:08.433601 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:44:08.434095 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:44:08.436060 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:44:08.437909 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:44:08.438388 | orchestrator | 2025-03-26 15:44:08.438715 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-03-26 15:44:08.439403 | orchestrator | Wednesday 26 March 2025 15:44:08 +0000 (0:00:02.168) 0:00:16.695 ******* 2025-03-26 15:44:09.279574 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-03-26 15:44:10.551147 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-26 15:44:10.552860 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-26 15:44:10.553231 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-26 15:44:10.554502 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-26 15:44:10.555140 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-26 15:44:10.556346 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-26 15:44:10.556628 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-03-26 15:44:10.557355 | orchestrator | 2025-03-26 15:44:10.557923 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-03-26 15:44:10.560015 | orchestrator | Wednesday 26 March 2025 15:44:10 +0000 (0:00:02.112) 0:00:18.807 ******* 2025-03-26 15:44:12.265084 | orchestrator | ok: [testbed-manager] 2025-03-26 15:44:12.265207 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:44:12.266258 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:44:12.268162 | 
orchestrator | changed: [testbed-node-1] 2025-03-26 15:44:12.269486 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:44:12.270482 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:44:12.271660 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:44:12.273731 | orchestrator | 2025-03-26 15:44:12.274714 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-03-26 15:44:12.275365 | orchestrator | Wednesday 26 March 2025 15:44:12 +0000 (0:00:01.720) 0:00:20.527 ******* 2025-03-26 15:44:13.808644 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-26 15:44:13.809160 | orchestrator | 2025-03-26 15:44:13.810142 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-03-26 15:44:13.811000 | orchestrator | Wednesday 26 March 2025 15:44:13 +0000 (0:00:01.543) 0:00:22.070 ******* 2025-03-26 15:44:14.374184 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:44:14.860509 | orchestrator | ok: [testbed-manager] 2025-03-26 15:44:14.860713 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:44:14.862443 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:44:14.862844 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:44:14.864563 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:44:14.865137 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:44:14.865846 | orchestrator | 2025-03-26 15:44:14.866471 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-03-26 15:44:14.867334 | orchestrator | Wednesday 26 March 2025 15:44:14 +0000 (0:00:01.053) 0:00:23.123 ******* 2025-03-26 15:44:15.031265 | orchestrator | ok: [testbed-manager] 2025-03-26 15:44:15.122629 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:44:15.399333 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:44:15.507491 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:44:15.598007 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:44:15.753627 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:44:15.754088 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:44:15.754679 | orchestrator | 2025-03-26 15:44:15.758438 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-03-26 15:44:16.172586 | orchestrator | Wednesday 26 March 2025 15:44:15 +0000 (0:00:00.890) 0:00:24.014 ******* 2025-03-26 15:44:16.172655 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-26 15:44:16.172945 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-03-26 15:44:16.265789 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-26 15:44:16.266132 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-03-26 15:44:16.866157 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-26 15:44:16.866573 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-03-26 15:44:16.866607 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-26 15:44:16.866629 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-03-26 15:44:16.866902 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-03-26 15:44:16.869264 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-03-26 15:44:16.869604 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-26 15:44:16.871152 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-03-26 15:44:16.872405 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-03-26 15:44:16.873222 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-03-26 15:44:16.874357 | orchestrator | 2025-03-26 15:44:16.875043 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-03-26 15:44:16.875811 | orchestrator | Wednesday 26 March 2025 15:44:16 +0000 (0:00:01.116) 0:00:25.130 ******* 2025-03-26 15:44:17.248701 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:44:17.329972 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:44:17.430308 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:44:17.517704 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:44:17.604724 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:44:18.881176 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:44:18.882263 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:44:18.883207 | orchestrator | 2025-03-26 15:44:18.884177 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-03-26 15:44:18.885246 | orchestrator | Wednesday 26 March 2025 15:44:18 +0000 (0:00:02.011) 0:00:27.142 ******* 2025-03-26 15:44:19.049156 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:44:19.153350 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:44:19.449643 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:44:19.555504 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:44:19.643614 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:44:19.684289 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:44:19.684746 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:44:19.684799 | orchestrator | 2025-03-26 15:44:19.685775 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:44:19.686241 | orchestrator | 2025-03-26 15:44:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:44:19.687298 | orchestrator | 2025-03-26 15:44:19 | INFO  | Please wait and do not abort execution. 
2025-03-26 15:44:19.688129 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-26 15:44:19.691044 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-26 15:44:19.691636 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-26 15:44:19.691660 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-26 15:44:19.691675 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-26 15:44:19.691689 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-26 15:44:19.691704 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-03-26 15:44:19.691721 | orchestrator | 2025-03-26 15:44:19.692612 | orchestrator | Wednesday 26 March 2025 15:44:19 +0000 (0:00:00.806) 0:00:27.949 ******* 2025-03-26 15:44:19.693348 | orchestrator | =============================================================================== 2025-03-26 15:44:19.694132 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.26s 2025-03-26 15:44:19.694887 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.17s 2025-03-26 15:44:19.695582 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 2.11s 2025-03-26 15:44:19.695941 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 2.01s 2025-03-26 15:44:19.696585 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.89s 2025-03-26 15:44:19.697387 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.82s 2025-03-26 15:44:19.697636 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.78s 2025-03-26 15:44:19.698483 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s 2025-03-26 15:44:19.698757 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.54s 2025-03-26 15:44:19.699463 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.35s 2025-03-26 15:44:19.700001 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.21s 2025-03-26 15:44:19.700394 | orchestrator | osism.commons.network : Create required directories --------------------- 1.19s 2025-03-26 15:44:19.701010 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.12s 2025-03-26 15:44:19.701477 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.11s 2025-03-26 15:44:19.702087 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.05s 2025-03-26 15:44:19.702766 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.93s 2025-03-26 15:44:19.703526 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.89s 2025-03-26 15:44:19.704171 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.81s 2025-03-26 15:44:19.704718 | orchestrator | osism.commons.network : Gather variables for each operating 
system ------ 0.77s 2025-03-26 15:44:20.297572 | orchestrator | + osism apply wireguard 2025-03-26 15:44:21.874278 | orchestrator | 2025-03-26 15:44:21 | INFO  | Task 00665e8f-f598-457c-a9cb-c1f0c9e47067 (wireguard) was prepared for execution. 2025-03-26 15:44:25.172639 | orchestrator | 2025-03-26 15:44:21 | INFO  | It takes a moment until task 00665e8f-f598-457c-a9cb-c1f0c9e47067 (wireguard) has been started and output is visible here. 2025-03-26 15:44:25.172792 | orchestrator | 2025-03-26 15:44:25.174474 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-03-26 15:44:25.174514 | orchestrator | 2025-03-26 15:44:25.176747 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-03-26 15:44:25.177577 | orchestrator | Wednesday 26 March 2025 15:44:25 +0000 (0:00:00.178) 0:00:00.178 ******* 2025-03-26 15:44:26.849362 | orchestrator | ok: [testbed-manager] 2025-03-26 15:44:26.849625 | orchestrator | 2025-03-26 15:44:26.851242 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-03-26 15:44:34.072381 | orchestrator | Wednesday 26 March 2025 15:44:26 +0000 (0:00:01.677) 0:00:01.855 ******* 2025-03-26 15:44:34.072512 | orchestrator | changed: [testbed-manager] 2025-03-26 15:44:34.072583 | orchestrator | 2025-03-26 15:44:34.073452 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-03-26 15:44:34.073684 | orchestrator | Wednesday 26 March 2025 15:44:34 +0000 (0:00:07.222) 0:00:09.078 ******* 2025-03-26 15:44:34.688918 | orchestrator | changed: [testbed-manager] 2025-03-26 15:44:34.690119 | orchestrator | 2025-03-26 15:44:34.690217 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-03-26 15:44:34.692962 | orchestrator | Wednesday 26 March 2025 15:44:34 +0000 (0:00:00.618) 0:00:09.697 ******* 2025-03-26 15:44:35.188112 | orchestrator | changed: [testbed-manager] 2025-03-26 15:44:35.189668 | orchestrator | 2025-03-26 15:44:35.191598 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-03-26 15:44:35.192115 | orchestrator | Wednesday 26 March 2025 15:44:35 +0000 (0:00:00.497) 0:00:10.194 ******* 2025-03-26 15:44:35.740472 | orchestrator | ok: [testbed-manager] 2025-03-26 15:44:35.741021 | orchestrator | 2025-03-26 15:44:35.741539 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-03-26 15:44:35.742689 | orchestrator | Wednesday 26 March 2025 15:44:35 +0000 (0:00:00.554) 0:00:10.749 ******* 2025-03-26 15:44:36.345736 | orchestrator | ok: [testbed-manager] 2025-03-26 15:44:36.346703 | orchestrator | 2025-03-26 15:44:36.347283 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-03-26 15:44:36.348376 | orchestrator | Wednesday 26 March 2025 15:44:36 +0000 (0:00:00.603) 0:00:11.352 ******* 2025-03-26 15:44:36.798171 | orchestrator | ok: [testbed-manager] 2025-03-26 15:44:36.798327 | orchestrator | 2025-03-26 15:44:36.799111 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-03-26 15:44:36.799720 | orchestrator | Wednesday 26 March 2025 15:44:36 +0000 (0:00:00.454) 0:00:11.806 ******* 2025-03-26 15:44:38.148251 | orchestrator | changed: [testbed-manager] 2025-03-26 15:44:38.148765 | orchestrator | 2025-03-26 15:44:38.148805 | orchestrator | TASK 
[osism.services.wireguard : Copy client configuration files] ************** 2025-03-26 15:44:38.150378 | orchestrator | Wednesday 26 March 2025 15:44:38 +0000 (0:00:01.347) 0:00:13.154 ******* 2025-03-26 15:44:39.171930 | orchestrator | changed: [testbed-manager] => (item=None) 2025-03-26 15:44:39.175259 | orchestrator | changed: [testbed-manager] 2025-03-26 15:44:39.177538 | orchestrator | 2025-03-26 15:44:39.177575 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-03-26 15:44:41.079892 | orchestrator | Wednesday 26 March 2025 15:44:39 +0000 (0:00:01.025) 0:00:14.179 ******* 2025-03-26 15:44:41.080017 | orchestrator | changed: [testbed-manager] 2025-03-26 15:44:41.081366 | orchestrator | 2025-03-26 15:44:41.082524 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-03-26 15:44:41.083058 | orchestrator | Wednesday 26 March 2025 15:44:41 +0000 (0:00:01.907) 0:00:16.087 ******* 2025-03-26 15:44:42.133810 | orchestrator | changed: [testbed-manager] 2025-03-26 15:44:42.134573 | orchestrator | 2025-03-26 15:44:42.135311 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:44:42.135341 | orchestrator | 2025-03-26 15:44:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:44:42.136672 | orchestrator | 2025-03-26 15:44:42 | INFO  | Please wait and do not abort execution. 2025-03-26 15:44:42.136702 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:44:42.138473 | orchestrator | 2025-03-26 15:44:42.138840 | orchestrator | Wednesday 26 March 2025 15:44:42 +0000 (0:00:01.054) 0:00:17.141 ******* 2025-03-26 15:44:42.138867 | orchestrator | =============================================================================== 2025-03-26 15:44:42.138886 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.22s 2025-03-26 15:44:42.139571 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.91s 2025-03-26 15:44:42.140251 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.68s 2025-03-26 15:44:42.140716 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.35s 2025-03-26 15:44:42.143230 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.05s 2025-03-26 15:44:42.143541 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.03s 2025-03-26 15:44:42.145610 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.62s 2025-03-26 15:44:42.145985 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.60s 2025-03-26 15:44:42.146878 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.55s 2025-03-26 15:44:42.147464 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.50s 2025-03-26 15:44:42.147868 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s 2025-03-26 15:44:42.702234 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-03-26 15:44:42.737502 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-03-26 15:44:42.806503 | orchestrator | 
Dload Upload Total Spent Left Speed 2025-03-26 15:44:42.806575 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 216 0 --:--:-- --:--:-- --:--:-- 220 2025-03-26 15:44:42.822858 | orchestrator | + osism apply --environment custom workarounds 2025-03-26 15:44:44.272712 | orchestrator | 2025-03-26 15:44:44 | INFO  | Trying to run play workarounds in environment custom 2025-03-26 15:44:44.323733 | orchestrator | 2025-03-26 15:44:44 | INFO  | Task 9ab0e478-17f9-4a9c-be4e-70d61cec70e6 (workarounds) was prepared for execution. 2025-03-26 15:44:47.757800 | orchestrator | 2025-03-26 15:44:44 | INFO  | It takes a moment until task 9ab0e478-17f9-4a9c-be4e-70d61cec70e6 (workarounds) has been started and output is visible here. 2025-03-26 15:44:47.757917 | orchestrator | 2025-03-26 15:44:47.759052 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-26 15:44:47.759086 | orchestrator | 2025-03-26 15:44:47.759449 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-03-26 15:44:47.760011 | orchestrator | Wednesday 26 March 2025 15:44:47 +0000 (0:00:00.147) 0:00:00.147 ******* 2025-03-26 15:44:47.941040 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-03-26 15:44:48.034340 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-03-26 15:44:48.122819 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-03-26 15:44:48.212216 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-03-26 15:44:48.317669 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-03-26 15:44:48.641635 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-03-26 15:44:48.642880 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-03-26 15:44:48.642925 | orchestrator | 2025-03-26 15:44:48.643263 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-03-26 15:44:48.644120 | orchestrator | 2025-03-26 15:44:48.644514 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-03-26 15:44:48.644945 | orchestrator | Wednesday 26 March 2025 15:44:48 +0000 (0:00:00.881) 0:00:01.028 ******* 2025-03-26 15:44:51.722980 | orchestrator | ok: [testbed-manager] 2025-03-26 15:44:51.723203 | orchestrator | 2025-03-26 15:44:51.723235 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-03-26 15:44:51.724546 | orchestrator | 2025-03-26 15:44:51.726112 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-03-26 15:44:51.726147 | orchestrator | Wednesday 26 March 2025 15:44:51 +0000 (0:00:03.080) 0:00:04.109 ******* 2025-03-26 15:44:53.703119 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:44:53.703592 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:44:53.704150 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:44:53.705988 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:44:53.706887 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:44:53.707926 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:44:53.708527 | orchestrator | 2025-03-26 15:44:53.709852 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-03-26 15:44:53.710285 | orchestrator | 2025-03-26 
15:44:53.711185 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-03-26 15:44:53.711680 | orchestrator | Wednesday 26 March 2025 15:44:53 +0000 (0:00:01.981) 0:00:06.090 ******* 2025-03-26 15:44:55.312305 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-26 15:44:55.314493 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-26 15:44:55.314548 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-26 15:44:55.314564 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-26 15:44:55.314578 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-26 15:44:55.314592 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-03-26 15:44:55.314636 | orchestrator | 2025-03-26 15:44:55.314652 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-03-26 15:44:55.314676 | orchestrator | Wednesday 26 March 2025 15:44:55 +0000 (0:00:01.606) 0:00:07.696 ******* 2025-03-26 15:44:58.569079 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:44:58.569496 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:44:58.571704 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:44:58.574171 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:44:58.574208 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:44:58.575376 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:44:58.576440 | orchestrator | 2025-03-26 15:44:58.577224 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-03-26 15:44:58.578158 | orchestrator | Wednesday 26 March 2025 15:44:58 +0000 (0:00:03.261) 0:00:10.958 ******* 2025-03-26 15:44:58.722725 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:44:58.803998 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:44:58.888227 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:44:59.149376 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:44:59.332799 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:44:59.332900 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:44:59.336059 | orchestrator | 2025-03-26 15:44:59.336855 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-03-26 15:44:59.338497 | orchestrator | 2025-03-26 15:44:59.340236 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-03-26 15:44:59.341019 | orchestrator | Wednesday 26 March 2025 15:44:59 +0000 (0:00:00.759) 0:00:11.717 ******* 2025-03-26 15:45:01.102348 | orchestrator | changed: [testbed-manager] 2025-03-26 15:45:01.103555 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:45:01.103594 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:45:01.104056 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:45:01.104165 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:45:01.104574 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:45:01.105937 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:45:01.106482 | orchestrator | 2025-03-26 15:45:01.106978 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-03-26 15:45:01.107649 | orchestrator | Wednesday 26 March 2025 15:45:01 +0000 (0:00:01.774) 0:00:13.492 ******* 2025-03-26 15:45:02.706268 | orchestrator | changed: [testbed-manager] 2025-03-26 15:45:02.706486 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:45:02.707837 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:45:02.708791 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:45:02.708819 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:45:02.708839 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:45:02.709278 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:45:02.710105 | orchestrator | 2025-03-26 15:45:02.710385 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-03-26 15:45:02.711216 | orchestrator | Wednesday 26 March 2025 15:45:02 +0000 (0:00:01.601) 0:00:15.093 ******* 2025-03-26 15:45:04.209597 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:45:04.210219 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:45:04.211783 | orchestrator | ok: [testbed-manager] 2025-03-26 15:45:04.213761 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:45:04.214119 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:45:04.216759 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:45:04.217184 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:45:04.217229 | orchestrator | 2025-03-26 15:45:04.218534 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-03-26 15:45:04.221040 | orchestrator | Wednesday 26 March 2025 15:45:04 +0000 (0:00:01.503) 0:00:16.597 ******* 2025-03-26 15:45:06.263050 | orchestrator | changed: [testbed-manager] 2025-03-26 15:45:06.263646 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:45:06.264715 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:45:06.266859 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:45:06.267908 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:45:06.269042 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:45:06.269749 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:45:06.270469 | orchestrator | 2025-03-26 15:45:06.271598 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-03-26 15:45:06.271846 | orchestrator | Wednesday 26 March 2025 15:45:06 +0000 (0:00:02.055) 0:00:18.653 ******* 2025-03-26 15:45:06.434002 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:45:06.515248 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:45:06.603846 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:45:06.684003 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:45:06.955086 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:45:07.111324 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:45:07.112748 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:45:07.117627 | orchestrator | 2025-03-26 15:45:07.118552 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-03-26 15:45:07.119169 | orchestrator | 2025-03-26 15:45:07.120452 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-03-26 15:45:07.121296 | orchestrator | Wednesday 26 March 2025 15:45:07 +0000 (0:00:00.847) 0:00:19.500 ******* 2025-03-26 15:45:09.820947 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:45:09.821854 
| orchestrator | ok: [testbed-node-3] 2025-03-26 15:45:09.824834 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:45:09.825902 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:45:09.827502 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:45:09.828724 | orchestrator | ok: [testbed-manager] 2025-03-26 15:45:09.830130 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:45:09.831566 | orchestrator | 2025-03-26 15:45:09.833963 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:45:09.834211 | orchestrator | 2025-03-26 15:45:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:45:09.834294 | orchestrator | 2025-03-26 15:45:09 | INFO  | Please wait and do not abort execution. 2025-03-26 15:45:09.835176 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:45:09.836127 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:45:09.836986 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:45:09.837660 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:45:09.838106 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:45:09.838910 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:45:09.839390 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:45:09.840521 | orchestrator | 2025-03-26 15:45:09.840814 | orchestrator | Wednesday 26 March 2025 15:45:09 +0000 (0:00:02.709) 0:00:22.210 ******* 2025-03-26 15:45:09.841677 | orchestrator | =============================================================================== 2025-03-26 15:45:09.842066 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.26s 2025-03-26 15:45:09.842659 | orchestrator | Apply netplan configuration --------------------------------------------- 3.08s 2025-03-26 15:45:09.843147 | orchestrator | Install python3-docker -------------------------------------------------- 2.71s 2025-03-26 15:45:09.843450 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.06s 2025-03-26 15:45:09.844150 | orchestrator | Apply netplan configuration --------------------------------------------- 1.98s 2025-03-26 15:45:09.844394 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.77s 2025-03-26 15:45:09.845108 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.61s 2025-03-26 15:45:09.845437 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.60s 2025-03-26 15:45:09.845980 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.50s 2025-03-26 15:45:09.846238 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.88s 2025-03-26 15:45:09.846948 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.85s 2025-03-26 15:45:09.847177 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.76s 2025-03-26 15:45:10.506109 | orchestrator | + osism 
apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-03-26 15:45:12.085380 | orchestrator | 2025-03-26 15:45:12 | INFO  | Task 5e2fc104-26fb-4b1f-a417-5f4e585b63fe (reboot) was prepared for execution. 2025-03-26 15:45:15.432785 | orchestrator | 2025-03-26 15:45:12 | INFO  | It takes a moment until task 5e2fc104-26fb-4b1f-a417-5f4e585b63fe (reboot) has been started and output is visible here. 2025-03-26 15:45:15.432980 | orchestrator | 2025-03-26 15:45:15.433062 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-26 15:45:15.433085 | orchestrator | 2025-03-26 15:45:15.433614 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-26 15:45:15.434102 | orchestrator | Wednesday 26 March 2025 15:45:15 +0000 (0:00:00.175) 0:00:00.175 ******* 2025-03-26 15:45:15.541763 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:45:15.542918 | orchestrator | 2025-03-26 15:45:15.543726 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-26 15:45:15.544641 | orchestrator | Wednesday 26 March 2025 15:45:15 +0000 (0:00:00.111) 0:00:00.286 ******* 2025-03-26 15:45:16.518641 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:45:16.520008 | orchestrator | 2025-03-26 15:45:16.520962 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-26 15:45:16.521715 | orchestrator | Wednesday 26 March 2025 15:45:16 +0000 (0:00:00.976) 0:00:01.263 ******* 2025-03-26 15:45:16.642688 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:45:16.644742 | orchestrator | 2025-03-26 15:45:16.645809 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-26 15:45:16.645847 | orchestrator | 2025-03-26 15:45:16.646638 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-26 15:45:16.647390 | orchestrator | Wednesday 26 March 2025 15:45:16 +0000 (0:00:00.123) 0:00:01.386 ******* 2025-03-26 15:45:16.758274 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:45:16.758709 | orchestrator | 2025-03-26 15:45:16.759947 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-26 15:45:16.761368 | orchestrator | Wednesday 26 March 2025 15:45:16 +0000 (0:00:00.116) 0:00:01.502 ******* 2025-03-26 15:45:17.517587 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:45:17.517752 | orchestrator | 2025-03-26 15:45:17.519898 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-26 15:45:17.520344 | orchestrator | Wednesday 26 March 2025 15:45:17 +0000 (0:00:00.758) 0:00:02.261 ******* 2025-03-26 15:45:17.639487 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:45:17.640066 | orchestrator | 2025-03-26 15:45:17.642234 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-26 15:45:17.642618 | orchestrator | 2025-03-26 15:45:17.643927 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-26 15:45:17.645051 | orchestrator | Wednesday 26 March 2025 15:45:17 +0000 (0:00:00.120) 0:00:02.382 ******* 2025-03-26 15:45:17.796488 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:45:17.797045 | orchestrator | 2025-03-26 15:45:17.797885 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] 
****************** 2025-03-26 15:45:17.798737 | orchestrator | Wednesday 26 March 2025 15:45:17 +0000 (0:00:00.159) 0:00:02.541 ******* 2025-03-26 15:45:18.648868 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:45:18.649410 | orchestrator | 2025-03-26 15:45:18.650574 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-26 15:45:18.651177 | orchestrator | Wednesday 26 March 2025 15:45:18 +0000 (0:00:00.851) 0:00:03.393 ******* 2025-03-26 15:45:18.776872 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:45:18.777015 | orchestrator | 2025-03-26 15:45:18.777041 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-26 15:45:18.777358 | orchestrator | 2025-03-26 15:45:18.778367 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-26 15:45:18.778560 | orchestrator | Wednesday 26 March 2025 15:45:18 +0000 (0:00:00.125) 0:00:03.518 ******* 2025-03-26 15:45:18.885689 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:45:18.886405 | orchestrator | 2025-03-26 15:45:18.888562 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-26 15:45:19.521051 | orchestrator | Wednesday 26 March 2025 15:45:18 +0000 (0:00:00.110) 0:00:03.629 ******* 2025-03-26 15:45:19.521137 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:45:19.522199 | orchestrator | 2025-03-26 15:45:19.523448 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-26 15:45:19.524472 | orchestrator | Wednesday 26 March 2025 15:45:19 +0000 (0:00:00.635) 0:00:04.264 ******* 2025-03-26 15:45:19.644674 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:45:19.645568 | orchestrator | 2025-03-26 15:45:19.647191 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-26 15:45:19.648247 | orchestrator | 2025-03-26 15:45:19.650470 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-26 15:45:19.652108 | orchestrator | Wednesday 26 March 2025 15:45:19 +0000 (0:00:00.125) 0:00:04.390 ******* 2025-03-26 15:45:19.754955 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:45:19.755189 | orchestrator | 2025-03-26 15:45:19.756273 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-26 15:45:19.757297 | orchestrator | Wednesday 26 March 2025 15:45:19 +0000 (0:00:00.110) 0:00:04.500 ******* 2025-03-26 15:45:20.476089 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:45:20.476268 | orchestrator | 2025-03-26 15:45:20.477375 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-26 15:45:20.478122 | orchestrator | Wednesday 26 March 2025 15:45:20 +0000 (0:00:00.719) 0:00:05.220 ******* 2025-03-26 15:45:20.603222 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:45:20.606168 | orchestrator | 2025-03-26 15:45:20.710943 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-03-26 15:45:20.711010 | orchestrator | 2025-03-26 15:45:20.711027 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-03-26 15:45:20.711041 | orchestrator | Wednesday 26 March 2025 15:45:20 +0000 (0:00:00.124) 0:00:05.345 ******* 2025-03-26 15:45:20.711065 | orchestrator | skipping: 
[testbed-node-5] 2025-03-26 15:45:20.713608 | orchestrator | 2025-03-26 15:45:21.356998 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-03-26 15:45:21.357123 | orchestrator | Wednesday 26 March 2025 15:45:20 +0000 (0:00:00.107) 0:00:05.453 ******* 2025-03-26 15:45:21.357158 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:45:21.363716 | orchestrator | 2025-03-26 15:45:21.365286 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-03-26 15:45:21.366278 | orchestrator | Wednesday 26 March 2025 15:45:21 +0000 (0:00:00.649) 0:00:06.102 ******* 2025-03-26 15:45:21.389658 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:45:21.390857 | orchestrator | 2025-03-26 15:45:21.390889 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:45:21.391368 | orchestrator | 2025-03-26 15:45:21 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:45:21.391976 | orchestrator | 2025-03-26 15:45:21 | INFO  | Please wait and do not abort execution. 2025-03-26 15:45:21.393146 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:45:21.394169 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:45:21.394205 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:45:21.395308 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:45:21.396187 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:45:21.396528 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:45:21.397175 | orchestrator | 2025-03-26 15:45:21.397890 | orchestrator | Wednesday 26 March 2025 15:45:21 +0000 (0:00:00.032) 0:00:06.135 ******* 2025-03-26 15:45:21.398494 | orchestrator | =============================================================================== 2025-03-26 15:45:21.399126 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.59s 2025-03-26 15:45:21.399530 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.72s 2025-03-26 15:45:21.399955 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s 2025-03-26 15:45:22.001297 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-03-26 15:45:23.517367 | orchestrator | 2025-03-26 15:45:23 | INFO  | Task 7aa2ebe9-858f-47e9-b15f-171cd1d794f3 (wait-for-connection) was prepared for execution. 2025-03-26 15:45:26.971967 | orchestrator | 2025-03-26 15:45:23 | INFO  | It takes a moment until task 7aa2ebe9-858f-47e9-b15f-171cd1d794f3 (wait-for-connection) has been started and output is visible here. 
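The reboot play above and the wait-for-connection play that follows form a common pair: `osism apply reboot` fires a reboot on each node without waiting for it to finish, and `osism apply wait-for-connection` then blocks until the nodes answer over SSH again. A minimal shell sketch of the same pattern, not the OSISM playbooks themselves (host names and timeouts are illustrative):

    for node in testbed-node-{0..5}; do
      # fire the reboot and ignore the dropped connection, mirroring
      # "Reboot system - do not wait for the reboot to complete"
      ssh "$node" 'sudo systemctl reboot' || true
    done
    for node in testbed-node-{0..5}; do
      # poll until SSH answers again, mirroring "Wait until remote system is reachable"
      until ssh -o ConnectTimeout=5 -o BatchMode=yes "$node" true 2>/dev/null; do
        sleep 5
      done
      echo "$node is reachable again"
    done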
2025-03-26 15:45:26.972116 | orchestrator | 2025-03-26 15:45:26.975458 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-03-26 15:45:26.977010 | orchestrator | 2025-03-26 15:45:26.977039 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-03-26 15:45:26.977061 | orchestrator | Wednesday 26 March 2025 15:45:26 +0000 (0:00:00.238) 0:00:00.238 ******* 2025-03-26 15:45:39.576727 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:45:39.577655 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:45:39.577689 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:45:39.577704 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:45:39.577718 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:45:39.577740 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:45:39.578616 | orchestrator | 2025-03-26 15:45:39.578983 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:45:39.579656 | orchestrator | 2025-03-26 15:45:39 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:45:39.579814 | orchestrator | 2025-03-26 15:45:39 | INFO  | Please wait and do not abort execution. 2025-03-26 15:45:39.580848 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:45:39.581546 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:45:39.581991 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:45:39.582536 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:45:39.583012 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:45:39.583528 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:45:39.584555 | orchestrator | 2025-03-26 15:45:39.585412 | orchestrator | Wednesday 26 March 2025 15:45:39 +0000 (0:00:12.603) 0:00:12.842 ******* 2025-03-26 15:45:39.586070 | orchestrator | =============================================================================== 2025-03-26 15:45:39.587286 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.60s 2025-03-26 15:45:40.172354 | orchestrator | + osism apply hddtemp 2025-03-26 15:45:41.733149 | orchestrator | 2025-03-26 15:45:41 | INFO  | Task 83d501a6-73be-4541-8e52-28980839253f (hddtemp) was prepared for execution. 2025-03-26 15:45:45.181491 | orchestrator | 2025-03-26 15:45:41 | INFO  | It takes a moment until task 83d501a6-73be-4541-8e52-28980839253f (hddtemp) has been started and output is visible here. 
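The hddtemp play that starts here swaps the retired hddtemp package for the kernel's drivetemp hwmon driver plus lm-sensors. A rough manual equivalent of those steps on a Debian/Ubuntu node, offered only as a sketch of what the osism.services.hddtemp tasks below do:

    sudo apt-get remove -y hddtemp || true                         # "Remove hddtemp package"
    echo drivetemp | sudo tee /etc/modules-load.d/drivetemp.conf   # "Enable Kernel Module drivetemp" (persist at boot)
    sudo modprobe drivetemp                                        # "Load Kernel Module drivetemp"
    sudo apt-get install -y lm-sensors                             # "Install lm-sensors"
    sudo systemctl enable --now lm-sensors                         # "Manage lm-sensors service"
    sensors                                                        # drive temperatures now show up as hwmon readings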
2025-03-26 15:45:45.181632 | orchestrator | 2025-03-26 15:45:45.183116 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-03-26 15:45:45.184015 | orchestrator | 2025-03-26 15:45:45.186562 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-03-26 15:45:45.187465 | orchestrator | Wednesday 26 March 2025 15:45:45 +0000 (0:00:00.236) 0:00:00.236 ******* 2025-03-26 15:45:45.366534 | orchestrator | ok: [testbed-manager] 2025-03-26 15:45:45.451915 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:45:45.575243 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:45:45.656599 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:45:45.743836 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:45:46.008531 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:45:46.009202 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:45:46.013669 | orchestrator | 2025-03-26 15:45:46.014127 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-03-26 15:45:46.015442 | orchestrator | Wednesday 26 March 2025 15:45:45 +0000 (0:00:00.826) 0:00:01.063 ******* 2025-03-26 15:45:47.326852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-26 15:45:47.327502 | orchestrator | 2025-03-26 15:45:47.328657 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-03-26 15:45:47.332506 | orchestrator | Wednesday 26 March 2025 15:45:47 +0000 (0:00:01.318) 0:00:02.381 ******* 2025-03-26 15:45:49.451509 | orchestrator | ok: [testbed-manager] 2025-03-26 15:45:49.452293 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:45:49.452350 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:45:49.454217 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:45:49.455126 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:45:49.456178 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:45:49.457642 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:45:49.458391 | orchestrator | 2025-03-26 15:45:49.459440 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-03-26 15:45:49.460096 | orchestrator | Wednesday 26 March 2025 15:45:49 +0000 (0:00:02.127) 0:00:04.509 ******* 2025-03-26 15:45:50.090721 | orchestrator | changed: [testbed-manager] 2025-03-26 15:45:50.176684 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:45:50.649760 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:45:50.650737 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:45:50.653557 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:45:50.653726 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:45:50.653755 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:45:50.654327 | orchestrator | 2025-03-26 15:45:50.655301 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-03-26 15:45:50.656180 | orchestrator | Wednesday 26 March 2025 15:45:50 +0000 (0:00:01.194) 0:00:05.703 ******* 2025-03-26 15:45:52.066686 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:45:52.067502 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:45:52.069192 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:45:52.069305 | orchestrator | ok: [testbed-node-3] 2025-03-26 
15:45:52.070257 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:45:52.071474 | orchestrator | ok: [testbed-manager] 2025-03-26 15:45:52.072093 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:45:52.073316 | orchestrator | 2025-03-26 15:45:52.073872 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-03-26 15:45:52.075228 | orchestrator | Wednesday 26 March 2025 15:45:52 +0000 (0:00:01.417) 0:00:07.120 ******* 2025-03-26 15:45:52.330013 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:45:52.415052 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:45:52.498468 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:45:52.580682 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:45:52.706849 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:45:52.707191 | orchestrator | changed: [testbed-manager] 2025-03-26 15:45:52.707223 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:45:52.708383 | orchestrator | 2025-03-26 15:45:52.709730 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-03-26 15:45:52.710092 | orchestrator | Wednesday 26 March 2025 15:45:52 +0000 (0:00:00.644) 0:00:07.764 ******* 2025-03-26 15:46:05.831175 | orchestrator | changed: [testbed-manager] 2025-03-26 15:46:05.832697 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:46:05.832740 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:46:05.832756 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:46:05.832771 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:46:05.832794 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:46:05.833819 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:46:05.834636 | orchestrator | 2025-03-26 15:46:05.835719 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-03-26 15:46:05.836220 | orchestrator | Wednesday 26 March 2025 15:46:05 +0000 (0:00:13.113) 0:00:20.878 ******* 2025-03-26 15:46:07.117776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-26 15:46:07.118161 | orchestrator | 2025-03-26 15:46:07.118203 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-03-26 15:46:07.121268 | orchestrator | Wednesday 26 March 2025 15:46:07 +0000 (0:00:01.294) 0:00:22.172 ******* 2025-03-26 15:46:09.109934 | orchestrator | changed: [testbed-manager] 2025-03-26 15:46:09.110144 | orchestrator | changed: [testbed-node-0] 2025-03-26 15:46:09.110173 | orchestrator | changed: [testbed-node-2] 2025-03-26 15:46:09.111065 | orchestrator | changed: [testbed-node-1] 2025-03-26 15:46:09.111173 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:46:09.112561 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:46:09.113952 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:46:09.115049 | orchestrator | 2025-03-26 15:46:09.115235 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:46:09.115879 | orchestrator | 2025-03-26 15:46:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:46:09.116305 | orchestrator | 2025-03-26 15:46:09 | INFO  | Please wait and do not abort execution. 
2025-03-26 15:46:09.117793 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:46:09.118390 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:09.119045 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:09.119719 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:09.120486 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:09.120989 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:09.121621 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:09.121817 | orchestrator | 2025-03-26 15:46:09.122823 | orchestrator | Wednesday 26 March 2025 15:46:09 +0000 (0:00:01.995) 0:00:24.168 ******* 2025-03-26 15:46:09.123313 | orchestrator | =============================================================================== 2025-03-26 15:46:09.123367 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.11s 2025-03-26 15:46:09.123566 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.13s 2025-03-26 15:46:09.123791 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.00s 2025-03-26 15:46:09.124459 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.42s 2025-03-26 15:46:09.124595 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.32s 2025-03-26 15:46:09.125322 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.29s 2025-03-26 15:46:09.126142 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.19s 2025-03-26 15:46:09.126926 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.83s 2025-03-26 15:46:09.127378 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.64s 2025-03-26 15:46:09.764107 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-03-26 15:46:11.130352 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-03-26 15:46:11.130641 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-03-26 15:46:11.130673 | orchestrator | + local max_attempts=60 2025-03-26 15:46:11.130689 | orchestrator | + local name=ceph-ansible 2025-03-26 15:46:11.130703 | orchestrator | + local attempt_num=1 2025-03-26 15:46:11.130725 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-03-26 15:46:11.168777 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-26 15:46:11.194283 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-03-26 15:46:11.194327 | orchestrator | + local max_attempts=60 2025-03-26 15:46:11.194342 | orchestrator | + local name=kolla-ansible 2025-03-26 15:46:11.194356 | orchestrator | + local attempt_num=1 2025-03-26 15:46:11.194371 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-03-26 15:46:11.194392 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-26 15:46:11.195080 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-03-26 15:46:11.195108 | orchestrator | + local max_attempts=60 2025-03-26 15:46:11.195124 | orchestrator | + local name=osism-ansible 2025-03-26 15:46:11.195140 | orchestrator | + local attempt_num=1 2025-03-26 15:46:11.195160 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-03-26 15:46:11.227869 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-03-26 15:46:11.647723 | orchestrator | + [[ true == \t\r\u\e ]] 2025-03-26 15:46:11.647820 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-03-26 15:46:11.647850 | orchestrator | ARA in ceph-ansible already disabled. 2025-03-26 15:46:12.050887 | orchestrator | ARA in kolla-ansible already disabled. 2025-03-26 15:46:12.417630 | orchestrator | ARA in osism-ansible already disabled. 2025-03-26 15:46:12.757839 | orchestrator | ARA in osism-kubernetes already disabled. 2025-03-26 15:46:12.758409 | orchestrator | + osism apply gather-facts 2025-03-26 15:46:14.381135 | orchestrator | 2025-03-26 15:46:14 | INFO  | Task f19f57d5-062f-4722-a538-fbb2d3358270 (gather-facts) was prepared for execution. 2025-03-26 15:46:17.786317 | orchestrator | 2025-03-26 15:46:14 | INFO  | It takes a moment until task f19f57d5-062f-4722-a538-fbb2d3358270 (gather-facts) has been started and output is visible here. 2025-03-26 15:46:17.786545 | orchestrator | 2025-03-26 15:46:17.786634 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-26 15:46:17.790827 | orchestrator | 2025-03-26 15:46:17.791120 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-26 15:46:17.791835 | orchestrator | Wednesday 26 March 2025 15:46:17 +0000 (0:00:00.186) 0:00:00.186 ******* 2025-03-26 15:46:23.146959 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:46:23.147664 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:46:23.152120 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:46:23.152384 | orchestrator | ok: [testbed-manager] 2025-03-26 15:46:23.152415 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:46:23.152456 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:46:23.152477 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:46:23.153542 | orchestrator | 2025-03-26 15:46:23.154145 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-26 15:46:23.154559 | orchestrator | 2025-03-26 15:46:23.155244 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-26 15:46:23.155840 | orchestrator | Wednesday 26 March 2025 15:46:23 +0000 (0:00:05.363) 0:00:05.549 ******* 2025-03-26 15:46:23.339069 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:46:23.417206 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:46:23.502398 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:46:23.586119 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:46:23.669985 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:46:23.702059 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:46:23.702490 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:46:23.703165 | orchestrator | 2025-03-26 15:46:23.704153 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:46:23.704705 | orchestrator | 2025-03-26 15:46:23 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-03-26 15:46:23.704802 | orchestrator | 2025-03-26 15:46:23 | INFO  | Please wait and do not abort execution. 2025-03-26 15:46:23.705811 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:23.706549 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:23.707541 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:23.708503 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:23.709468 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:23.710444 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:23.710819 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-03-26 15:46:23.711285 | orchestrator | 2025-03-26 15:46:23.711693 | orchestrator | Wednesday 26 March 2025 15:46:23 +0000 (0:00:00.556) 0:00:06.105 ******* 2025-03-26 15:46:23.712137 | orchestrator | =============================================================================== 2025-03-26 15:46:23.712563 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.36s 2025-03-26 15:46:23.712819 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2025-03-26 15:46:24.374974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-03-26 15:46:24.390249 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-03-26 15:46:24.411519 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-03-26 15:46:24.430522 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-03-26 15:46:24.448066 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-03-26 15:46:24.469351 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-03-26 15:46:24.484276 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-03-26 15:46:24.497163 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-03-26 15:46:24.510069 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-03-26 15:46:24.523463 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-03-26 15:46:24.538335 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-03-26 15:46:24.549675 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-03-26 15:46:24.562786 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-03-26 15:46:24.579540 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-03-26 15:46:24.596593 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-03-26 15:46:24.610292 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-03-26 15:46:24.626231 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-03-26 15:46:24.643094 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-03-26 15:46:24.659674 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-03-26 15:46:24.685811 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-03-26 15:46:24.701346 | orchestrator | + [[ false == \t\r\u\e ]] 2025-03-26 15:46:24.831939 | orchestrator | changed 2025-03-26 15:46:24.894449 | 2025-03-26 15:46:24.894586 | TASK [Deploy services] 2025-03-26 15:46:25.026325 | orchestrator | skipping: Conditional result was False 2025-03-26 15:46:25.046711 | 2025-03-26 15:46:25.046860 | TASK [Deploy in a nutshell] 2025-03-26 15:46:25.706468 | orchestrator | + set -e 2025-03-26 15:46:25.706651 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-03-26 15:46:25.706679 | orchestrator | ++ export INTERACTIVE=false 2025-03-26 15:46:25.706697 | orchestrator | ++ INTERACTIVE=false 2025-03-26 15:46:25.706740 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-03-26 15:46:25.706758 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-03-26 15:46:25.706784 | orchestrator | + source /opt/manager-vars.sh 2025-03-26 15:46:25.707982 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-03-26 15:46:25.708014 | orchestrator | ++ NUMBER_OF_NODES=6 2025-03-26 15:46:25.708031 | orchestrator | 2025-03-26 15:46:25.708046 | orchestrator | # PULL IMAGES 2025-03-26 15:46:25.708060 | orchestrator | 2025-03-26 15:46:25.708073 | orchestrator | ++ export CEPH_VERSION=quincy 2025-03-26 15:46:25.708087 | orchestrator | ++ CEPH_VERSION=quincy 2025-03-26 15:46:25.708101 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-03-26 15:46:25.708115 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-03-26 15:46:25.708129 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-03-26 15:46:25.708144 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-03-26 15:46:25.708158 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-03-26 15:46:25.708172 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-03-26 15:46:25.708186 | orchestrator | ++ export ARA=false 2025-03-26 15:46:25.708200 | orchestrator | ++ ARA=false 2025-03-26 15:46:25.708213 | orchestrator | ++ export TEMPEST=false 2025-03-26 15:46:25.708227 | orchestrator | ++ TEMPEST=false 2025-03-26 15:46:25.708242 | orchestrator | ++ export IS_ZUUL=true 2025-03-26 15:46:25.708255 | orchestrator | ++ IS_ZUUL=true 2025-03-26 15:46:25.708269 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.171 2025-03-26 15:46:25.708283 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.171 2025-03-26 15:46:25.708297 | orchestrator | ++ export EXTERNAL_API=false 2025-03-26 15:46:25.708311 | orchestrator | ++ EXTERNAL_API=false 2025-03-26 15:46:25.708325 
| orchestrator | ++ export IMAGE_USER=ubuntu 2025-03-26 15:46:25.708338 | orchestrator | ++ IMAGE_USER=ubuntu 2025-03-26 15:46:25.708360 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-03-26 15:46:25.708374 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-03-26 15:46:25.708388 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-03-26 15:46:25.708401 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-03-26 15:46:25.708463 | orchestrator | + echo 2025-03-26 15:46:25.708482 | orchestrator | + echo '# PULL IMAGES' 2025-03-26 15:46:25.708496 | orchestrator | + echo 2025-03-26 15:46:25.708516 | orchestrator | ++ semver 8.1.0 7.0.0 2025-03-26 15:46:25.770357 | orchestrator | + [[ 1 -ge 0 ]] 2025-03-26 15:46:27.341454 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-03-26 15:46:27.341617 | orchestrator | 2025-03-26 15:46:27 | INFO  | Trying to run play pull-images in environment custom 2025-03-26 15:46:27.393460 | orchestrator | 2025-03-26 15:46:27 | INFO  | Task be344f69-6e33-40a5-82bf-479441d17abc (pull-images) was prepared for execution. 2025-03-26 15:46:30.688297 | orchestrator | 2025-03-26 15:46:27 | INFO  | It takes a moment until task be344f69-6e33-40a5-82bf-479441d17abc (pull-images) has been started and output is visible here. 2025-03-26 15:46:30.688455 | orchestrator | 2025-03-26 15:46:30.688622 | orchestrator | PLAY [Pull images] ************************************************************* 2025-03-26 15:46:30.689338 | orchestrator | 2025-03-26 15:46:30.690442 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-03-26 15:46:30.690941 | orchestrator | Wednesday 26 March 2025 15:46:30 +0000 (0:00:00.149) 0:00:00.149 ******* 2025-03-26 15:47:10.398162 | orchestrator | changed: [testbed-manager] 2025-03-26 15:48:05.808741 | orchestrator | 2025-03-26 15:48:05.808897 | orchestrator | TASK [Pull other images] ******************************************************* 2025-03-26 15:48:05.808919 | orchestrator | Wednesday 26 March 2025 15:47:10 +0000 (0:00:39.709) 0:00:39.859 ******* 2025-03-26 15:48:05.808952 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-03-26 15:48:05.809689 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-03-26 15:48:05.809717 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-03-26 15:48:05.809732 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-03-26 15:48:05.809762 | orchestrator | changed: [testbed-manager] => (item=common) 2025-03-26 15:48:05.809778 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-03-26 15:48:05.809799 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-03-26 15:48:05.809861 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-03-26 15:48:05.809911 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-03-26 15:48:05.810459 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-03-26 15:48:05.813636 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-03-26 15:48:05.814129 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-03-26 15:48:05.814154 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-03-26 15:48:05.814169 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-03-26 15:48:05.814183 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-03-26 15:48:05.814202 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-03-26 15:48:05.814388 | orchestrator | 
changed: [testbed-manager] => (item=octavia) 2025-03-26 15:48:05.814719 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-03-26 15:48:05.814993 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-03-26 15:48:05.817676 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-03-26 15:48:05.818127 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-03-26 15:48:05.818157 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-03-26 15:48:05.818174 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-03-26 15:48:05.818188 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-03-26 15:48:05.818206 | orchestrator | 2025-03-26 15:48:05.818336 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:48:05.818732 | orchestrator | 2025-03-26 15:48:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:48:05.818877 | orchestrator | 2025-03-26 15:48:05 | INFO  | Please wait and do not abort execution. 2025-03-26 15:48:05.819554 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 15:48:05.822523 | orchestrator | 2025-03-26 15:48:05.822606 | orchestrator | Wednesday 26 March 2025 15:48:05 +0000 (0:00:55.410) 0:01:35.270 ******* 2025-03-26 15:48:05.822627 | orchestrator | =============================================================================== 2025-03-26 15:48:05.822643 | orchestrator | Pull other images ------------------------------------------------------ 55.41s 2025-03-26 15:48:05.822662 | orchestrator | Pull keystone image ---------------------------------------------------- 39.71s 2025-03-26 15:48:08.143597 | orchestrator | 2025-03-26 15:48:08 | INFO  | Trying to run play wipe-partitions in environment custom 2025-03-26 15:48:08.207544 | orchestrator | 2025-03-26 15:48:08 | INFO  | Task 7130458e-3c45-4553-8b53-51f7c5daa050 (wipe-partitions) was prepared for execution. 2025-03-26 15:48:12.035609 | orchestrator | 2025-03-26 15:48:08 | INFO  | It takes a moment until task 7130458e-3c45-4553-8b53-51f7c5daa050 (wipe-partitions) has been started and output is visible here. 
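The pull-images play above pre-pulls the Kolla service images on the manager so the later deploy steps do not stall on downloads. A shell sketch of the same idea; the registry, namespace and tag below are placeholders, since the real values come from the OSISM configuration and are not visible in this log:

    REGISTRY=registry.example.com/kolla   # placeholder, not the real registry
    TAG=2024.1                            # matches OPENSTACK_VERSION from the environment above
    for image in keystone aodh barbican ceilometer cinder common designate glance \
                 grafana horizon ironic loadbalancer magnum mariadb memcached neutron \
                 nova octavia opensearch openvswitch ovn placement rabbitmq redis skyline; do
      docker pull "${REGISTRY}/${image}:${TAG}" || echo "pull failed for ${image}" >&2
    done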
2025-03-26 15:48:12.035757 | orchestrator | 2025-03-26 15:48:12.036099 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-03-26 15:48:12.036214 | orchestrator | 2025-03-26 15:48:12.036729 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-03-26 15:48:12.037462 | orchestrator | Wednesday 26 March 2025 15:48:12 +0000 (0:00:00.148) 0:00:00.148 ******* 2025-03-26 15:48:12.678482 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:48:12.680385 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:48:12.680446 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:48:12.681071 | orchestrator | 2025-03-26 15:48:12.681133 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-03-26 15:48:12.681202 | orchestrator | Wednesday 26 March 2025 15:48:12 +0000 (0:00:00.645) 0:00:00.794 ******* 2025-03-26 15:48:12.859072 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:12.953924 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:48:12.954065 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:48:12.955947 | orchestrator | 2025-03-26 15:48:12.956359 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-03-26 15:48:12.956804 | orchestrator | Wednesday 26 March 2025 15:48:12 +0000 (0:00:00.273) 0:00:01.067 ******* 2025-03-26 15:48:13.862461 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:48:13.862715 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:48:13.862763 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:48:13.863076 | orchestrator | 2025-03-26 15:48:13.863328 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-03-26 15:48:13.863658 | orchestrator | Wednesday 26 March 2025 15:48:13 +0000 (0:00:00.905) 0:00:01.973 ******* 2025-03-26 15:48:14.061507 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:14.156261 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:48:14.159497 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:48:14.160164 | orchestrator | 2025-03-26 15:48:14.160208 | orchestrator | TASK [Check device availability] *********************************************** 2025-03-26 15:48:14.161342 | orchestrator | Wednesday 26 March 2025 15:48:14 +0000 (0:00:00.299) 0:00:02.272 ******* 2025-03-26 15:48:15.447808 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-03-26 15:48:15.448836 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-03-26 15:48:15.448871 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-03-26 15:48:15.448893 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-03-26 15:48:15.449085 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-03-26 15:48:15.449348 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-03-26 15:48:15.449688 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-03-26 15:48:15.453168 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-03-26 15:48:15.453400 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-03-26 15:48:15.453483 | orchestrator | 2025-03-26 15:48:15.453551 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-03-26 15:48:15.453731 | orchestrator | Wednesday 26 March 2025 15:48:15 +0000 (0:00:01.289) 0:00:03.562 ******* 2025-03-26 15:48:16.906906 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-03-26 15:48:16.907071 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-03-26 15:48:16.907096 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-03-26 15:48:16.907464 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-03-26 15:48:16.907502 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-03-26 15:48:16.907958 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-03-26 15:48:16.907991 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-03-26 15:48:16.908210 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-03-26 15:48:16.908514 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-03-26 15:48:16.908940 | orchestrator | 2025-03-26 15:48:16.909490 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-03-26 15:48:16.909694 | orchestrator | Wednesday 26 March 2025 15:48:16 +0000 (0:00:01.461) 0:00:05.023 ******* 2025-03-26 15:48:19.364879 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-03-26 15:48:19.368270 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-03-26 15:48:19.368343 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-03-26 15:48:19.369666 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-03-26 15:48:19.371322 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-03-26 15:48:19.373985 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-03-26 15:48:19.375278 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-03-26 15:48:19.376596 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-03-26 15:48:19.377930 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-03-26 15:48:19.379341 | orchestrator | 2025-03-26 15:48:19.381058 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-03-26 15:48:19.381781 | orchestrator | Wednesday 26 March 2025 15:48:19 +0000 (0:00:02.449) 0:00:07.473 ******* 2025-03-26 15:48:20.053761 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:48:20.054711 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:48:20.058183 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:48:20.059352 | orchestrator | 2025-03-26 15:48:20.059389 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-03-26 15:48:20.059457 | orchestrator | Wednesday 26 March 2025 15:48:20 +0000 (0:00:00.693) 0:00:08.167 ******* 2025-03-26 15:48:20.688588 | orchestrator | changed: [testbed-node-3] 2025-03-26 15:48:20.689341 | orchestrator | changed: [testbed-node-4] 2025-03-26 15:48:20.691097 | orchestrator | changed: [testbed-node-5] 2025-03-26 15:48:20.691643 | orchestrator | 2025-03-26 15:48:20.693056 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:48:20.693256 | orchestrator | 2025-03-26 15:48:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:48:20.693289 | orchestrator | 2025-03-26 15:48:20 | INFO  | Please wait and do not abort execution. 
2025-03-26 15:48:20.694467 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:48:20.696131 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:48:20.697430 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:48:20.698285 | orchestrator | 2025-03-26 15:48:20.699091 | orchestrator | Wednesday 26 March 2025 15:48:20 +0000 (0:00:00.635) 0:00:08.802 ******* 2025-03-26 15:48:20.699553 | orchestrator | =============================================================================== 2025-03-26 15:48:20.700732 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.45s 2025-03-26 15:48:20.700905 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.46s 2025-03-26 15:48:20.700933 | orchestrator | Check device availability ----------------------------------------------- 1.29s 2025-03-26 15:48:20.701589 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.91s 2025-03-26 15:48:20.701697 | orchestrator | Reload udev rules ------------------------------------------------------- 0.69s 2025-03-26 15:48:20.702220 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.65s 2025-03-26 15:48:20.702739 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2025-03-26 15:48:20.703238 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.30s 2025-03-26 15:48:20.704133 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s 2025-03-26 15:48:23.048098 | orchestrator | 2025-03-26 15:48:23 | INFO  | Task c8fb1386-9d5c-479c-9b5e-9ef3388aecd4 (facts) was prepared for execution. 2025-03-26 15:48:27.009088 | orchestrator | 2025-03-26 15:48:23 | INFO  | It takes a moment until task c8fb1386-9d5c-479c-9b5e-9ef3388aecd4 (facts) has been started and output is visible here. 
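The wipe-partitions play above cleans the Ceph OSD data disks (/dev/sdb, /dev/sdc, /dev/sdd) on testbed-node-3 through testbed-node-5 before the LVM configuration step. Reconstructed only from the task names and device list in this log (the actual OSISM playbook may be implemented differently), the sequence corresponds roughly to:

# Sketch of the wipe sequence suggested by the task names above; not the
# original OSISM playbook. Assumes the OSD data devices are /dev/sdb..sdd.
- name: Wipe partitions (sketch)
  hosts: testbed-node-3,testbed-node-4,testbed-node-5
  become: true
  vars:
    wipe_devices: [/dev/sdb, /dev/sdc, /dev/sdd]
  tasks:
    - name: Wipe partitions with wipefs
      ansible.builtin.command: "wipefs --all --force {{ item }}"
      loop: "{{ wipe_devices }}"

    - name: Overwrite first 32M with zeros
      ansible.builtin.command: "dd if=/dev/zero of={{ item }} bs=1M count=32 oflag=direct"
      loop: "{{ wipe_devices }}"

    - name: Reload udev rules
      ansible.builtin.command: udevadm control --reload-rules

    - name: Request device events from the kernel
      ansible.builtin.command: udevadm trigger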
2025-03-26 15:48:27.009212 | orchestrator | 2025-03-26 15:48:27.009568 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-03-26 15:48:27.010184 | orchestrator | 2025-03-26 15:48:27.011391 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-26 15:48:27.012493 | orchestrator | Wednesday 26 March 2025 15:48:27 +0000 (0:00:00.265) 0:00:00.265 ******* 2025-03-26 15:48:28.281728 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:48:28.282180 | orchestrator | ok: [testbed-manager] 2025-03-26 15:48:28.282577 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:48:28.284155 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:48:28.284620 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:48:28.284652 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:48:28.284748 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:48:28.285950 | orchestrator | 2025-03-26 15:48:28.287297 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-26 15:48:28.524053 | orchestrator | Wednesday 26 March 2025 15:48:28 +0000 (0:00:01.270) 0:00:01.536 ******* 2025-03-26 15:48:28.524169 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:48:28.664568 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:48:28.762824 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:48:28.873869 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:48:28.982305 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:29.913126 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:48:29.914134 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:48:29.915183 | orchestrator | 2025-03-26 15:48:29.916153 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-26 15:48:29.916852 | orchestrator | 2025-03-26 15:48:29.917880 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-26 15:48:29.918538 | orchestrator | Wednesday 26 March 2025 15:48:29 +0000 (0:00:01.633) 0:00:03.169 ******* 2025-03-26 15:48:35.739117 | orchestrator | ok: [testbed-node-1] 2025-03-26 15:48:35.741428 | orchestrator | ok: [testbed-node-2] 2025-03-26 15:48:35.741937 | orchestrator | ok: [testbed-node-0] 2025-03-26 15:48:35.742652 | orchestrator | ok: [testbed-manager] 2025-03-26 15:48:35.743754 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:48:35.747189 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:48:35.747630 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:48:35.748447 | orchestrator | 2025-03-26 15:48:35.749109 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-26 15:48:35.749874 | orchestrator | 2025-03-26 15:48:35.750684 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-26 15:48:35.751224 | orchestrator | Wednesday 26 March 2025 15:48:35 +0000 (0:00:05.829) 0:00:08.999 ******* 2025-03-26 15:48:36.124960 | orchestrator | skipping: [testbed-manager] 2025-03-26 15:48:36.200355 | orchestrator | skipping: [testbed-node-0] 2025-03-26 15:48:36.287466 | orchestrator | skipping: [testbed-node-1] 2025-03-26 15:48:36.380862 | orchestrator | skipping: [testbed-node-2] 2025-03-26 15:48:36.496567 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:36.541850 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:48:36.543106 | orchestrator | skipping: 
[testbed-node-5] 2025-03-26 15:48:36.544203 | orchestrator | 2025-03-26 15:48:36.545059 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:48:36.545634 | orchestrator | 2025-03-26 15:48:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:48:36.547059 | orchestrator | 2025-03-26 15:48:36 | INFO  | Please wait and do not abort execution. 2025-03-26 15:48:36.547092 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:48:36.548225 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:48:36.548951 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:48:36.549762 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:48:36.550433 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:48:36.552510 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:48:36.553375 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 15:48:36.554154 | orchestrator | 2025-03-26 15:48:36.554714 | orchestrator | Wednesday 26 March 2025 15:48:36 +0000 (0:00:00.803) 0:00:09.802 ******* 2025-03-26 15:48:36.555537 | orchestrator | =============================================================================== 2025-03-26 15:48:36.555732 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.83s 2025-03-26 15:48:36.556389 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.63s 2025-03-26 15:48:36.556817 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.27s 2025-03-26 15:48:36.557545 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.80s 2025-03-26 15:48:39.372079 | orchestrator | 2025-03-26 15:48:39 | INFO  | Task 7da29f33-c158-476a-b0d6-8d7a834d3874 (ceph-configure-lvm-volumes) was prepared for execution. 2025-03-26 15:48:39.373316 | orchestrator | 2025-03-26 15:48:39 | INFO  | It takes a moment until task 7da29f33-c158-476a-b0d6-8d7a834d3874 (ceph-configure-lvm-volumes) has been started and output is visible here. 
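The ceph-configure-lvm-volumes task that starts here inspects the block devices on each storage node, assigns per-OSD LVM UUIDs, and writes the result back to the configuration repository on the manager (the "Write configuration file" handler further below). Based on the "Print configuration data" output for testbed-node-3 later in this log, the generated host configuration has roughly the following shape; the file location is an assumption, the values are taken from the log:

# Approximate shape of the generated Ceph LVM configuration for testbed-node-3,
# reconstructed from the "Print configuration data" output below. The target
# path (somewhere under the host_vars of the configuration repository) is an
# assumption; the UUIDs are the ones reported in the log.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 5f4a1373-2d27-5995-abd5-5c6678505b20
  sdc:
    osd_lvm_uuid: dd5d4016-0a76-5f2c-8b69-aeca47fee476
lvm_volumes:
  - data: osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20
    data_vg: ceph-5f4a1373-2d27-5995-abd5-5c6678505b20
  - data: osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476
    data_vg: ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476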
2025-03-26 15:48:43.703141 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-26 15:48:44.405142 | orchestrator | 2025-03-26 15:48:44.406700 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-03-26 15:48:44.407073 | orchestrator | 2025-03-26 15:48:44.409878 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-26 15:48:44.412137 | orchestrator | Wednesday 26 March 2025 15:48:44 +0000 (0:00:00.558) 0:00:00.558 ******* 2025-03-26 15:48:44.683576 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-26 15:48:44.683777 | orchestrator | 2025-03-26 15:48:44.683808 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-26 15:48:44.684245 | orchestrator | Wednesday 26 March 2025 15:48:44 +0000 (0:00:00.274) 0:00:00.833 ******* 2025-03-26 15:48:44.961871 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:48:44.962608 | orchestrator | 2025-03-26 15:48:44.962641 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:44.962665 | orchestrator | Wednesday 26 March 2025 15:48:44 +0000 (0:00:00.283) 0:00:01.117 ******* 2025-03-26 15:48:45.562952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-03-26 15:48:45.565788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-03-26 15:48:45.567105 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-03-26 15:48:45.567141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-03-26 15:48:45.567395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-03-26 15:48:45.567441 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-03-26 15:48:45.568252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-03-26 15:48:45.569219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-03-26 15:48:45.570994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-03-26 15:48:45.571962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-03-26 15:48:45.572546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-03-26 15:48:45.572571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-03-26 15:48:45.572591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-03-26 15:48:45.573003 | orchestrator | 2025-03-26 15:48:45.573139 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:45.573168 | orchestrator | Wednesday 26 March 2025 15:48:45 +0000 (0:00:00.601) 0:00:01.718 ******* 2025-03-26 15:48:45.909764 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:45.910092 | orchestrator | 2025-03-26 15:48:45.910129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:45.910152 | orchestrator | Wednesday 26 March 2025 15:48:45 +0000 
(0:00:00.346) 0:00:02.065 ******* 2025-03-26 15:48:46.407600 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:46.410641 | orchestrator | 2025-03-26 15:48:46.410686 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:46.411006 | orchestrator | Wednesday 26 March 2025 15:48:46 +0000 (0:00:00.496) 0:00:02.561 ******* 2025-03-26 15:48:46.837688 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:46.837941 | orchestrator | 2025-03-26 15:48:46.837976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:46.838299 | orchestrator | Wednesday 26 March 2025 15:48:46 +0000 (0:00:00.430) 0:00:02.992 ******* 2025-03-26 15:48:47.185691 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:47.188021 | orchestrator | 2025-03-26 15:48:47.188758 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:47.188879 | orchestrator | Wednesday 26 March 2025 15:48:47 +0000 (0:00:00.347) 0:00:03.339 ******* 2025-03-26 15:48:47.547019 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:47.547861 | orchestrator | 2025-03-26 15:48:47.549638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:47.549669 | orchestrator | Wednesday 26 March 2025 15:48:47 +0000 (0:00:00.364) 0:00:03.703 ******* 2025-03-26 15:48:47.924109 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:47.924474 | orchestrator | 2025-03-26 15:48:47.925268 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:47.925788 | orchestrator | Wednesday 26 March 2025 15:48:47 +0000 (0:00:00.372) 0:00:04.076 ******* 2025-03-26 15:48:48.188576 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:48.188731 | orchestrator | 2025-03-26 15:48:48.190650 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:48.190847 | orchestrator | Wednesday 26 March 2025 15:48:48 +0000 (0:00:00.264) 0:00:04.340 ******* 2025-03-26 15:48:48.391617 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:48.391745 | orchestrator | 2025-03-26 15:48:48.392109 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:48.392251 | orchestrator | Wednesday 26 March 2025 15:48:48 +0000 (0:00:00.207) 0:00:04.547 ******* 2025-03-26 15:48:49.240539 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e8a4f1c8-c63b-414f-8506-2e323c8cffff) 2025-03-26 15:48:49.243858 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e8a4f1c8-c63b-414f-8506-2e323c8cffff) 2025-03-26 15:48:49.244371 | orchestrator | 2025-03-26 15:48:49.244397 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:49.244444 | orchestrator | Wednesday 26 March 2025 15:48:49 +0000 (0:00:00.848) 0:00:05.396 ******* 2025-03-26 15:48:50.080598 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_538d5b73-1cd2-4d51-a71b-5a4f6cbc6cf8) 2025-03-26 15:48:50.085056 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_538d5b73-1cd2-4d51-a71b-5a4f6cbc6cf8) 2025-03-26 15:48:50.085573 | orchestrator | 2025-03-26 15:48:50.086313 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 
15:48:50.086764 | orchestrator | Wednesday 26 March 2025 15:48:50 +0000 (0:00:00.837) 0:00:06.234 ******* 2025-03-26 15:48:50.594800 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_111b4eab-b6cc-4138-b8ce-a1d0ca0ede1d) 2025-03-26 15:48:50.596356 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_111b4eab-b6cc-4138-b8ce-a1d0ca0ede1d) 2025-03-26 15:48:50.600699 | orchestrator | 2025-03-26 15:48:50.600912 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:50.601442 | orchestrator | Wednesday 26 March 2025 15:48:50 +0000 (0:00:00.514) 0:00:06.748 ******* 2025-03-26 15:48:51.066790 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a37eb2c8-f05e-43da-8c12-0830eb7c9d71) 2025-03-26 15:48:51.067251 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a37eb2c8-f05e-43da-8c12-0830eb7c9d71) 2025-03-26 15:48:51.067332 | orchestrator | 2025-03-26 15:48:51.069076 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:48:51.070685 | orchestrator | Wednesday 26 March 2025 15:48:51 +0000 (0:00:00.473) 0:00:07.222 ******* 2025-03-26 15:48:51.425905 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-26 15:48:51.426753 | orchestrator | 2025-03-26 15:48:51.427201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:51.429486 | orchestrator | Wednesday 26 March 2025 15:48:51 +0000 (0:00:00.358) 0:00:07.581 ******* 2025-03-26 15:48:51.876822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-03-26 15:48:51.879827 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-03-26 15:48:51.879869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-03-26 15:48:51.880725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-03-26 15:48:51.880752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-03-26 15:48:51.880770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-03-26 15:48:51.880792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-03-26 15:48:51.881174 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-03-26 15:48:51.881585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-03-26 15:48:51.881984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-03-26 15:48:51.882820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-03-26 15:48:51.884803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-03-26 15:48:52.092658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-03-26 15:48:52.093173 | orchestrator | 2025-03-26 15:48:52.093205 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:52.093233 | orchestrator | Wednesday 26 March 2025 15:48:51 
+0000 (0:00:00.449) 0:00:08.030 ******* 2025-03-26 15:48:52.093259 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:52.095319 | orchestrator | 2025-03-26 15:48:52.096858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:52.097272 | orchestrator | Wednesday 26 March 2025 15:48:52 +0000 (0:00:00.217) 0:00:08.248 ******* 2025-03-26 15:48:52.348828 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:52.350351 | orchestrator | 2025-03-26 15:48:52.350386 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:52.351365 | orchestrator | Wednesday 26 March 2025 15:48:52 +0000 (0:00:00.253) 0:00:08.501 ******* 2025-03-26 15:48:52.642176 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:52.642998 | orchestrator | 2025-03-26 15:48:52.643742 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:52.644488 | orchestrator | Wednesday 26 March 2025 15:48:52 +0000 (0:00:00.295) 0:00:08.797 ******* 2025-03-26 15:48:52.863452 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:52.864318 | orchestrator | 2025-03-26 15:48:52.865496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:52.866081 | orchestrator | Wednesday 26 March 2025 15:48:52 +0000 (0:00:00.216) 0:00:09.014 ******* 2025-03-26 15:48:53.536608 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:53.536808 | orchestrator | 2025-03-26 15:48:53.538103 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:53.539558 | orchestrator | Wednesday 26 March 2025 15:48:53 +0000 (0:00:00.675) 0:00:09.690 ******* 2025-03-26 15:48:53.758200 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:53.759255 | orchestrator | 2025-03-26 15:48:53.760572 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:53.761206 | orchestrator | Wednesday 26 March 2025 15:48:53 +0000 (0:00:00.225) 0:00:09.915 ******* 2025-03-26 15:48:54.008789 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:54.009066 | orchestrator | 2025-03-26 15:48:54.009101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:54.009299 | orchestrator | Wednesday 26 March 2025 15:48:54 +0000 (0:00:00.248) 0:00:10.164 ******* 2025-03-26 15:48:54.237337 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:54.237570 | orchestrator | 2025-03-26 15:48:54.238063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:54.238308 | orchestrator | Wednesday 26 March 2025 15:48:54 +0000 (0:00:00.225) 0:00:10.389 ******* 2025-03-26 15:48:55.060075 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-03-26 15:48:55.062588 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-03-26 15:48:55.062661 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-03-26 15:48:55.062682 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-03-26 15:48:55.063482 | orchestrator | 2025-03-26 15:48:55.063561 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:55.064245 | orchestrator | Wednesday 26 March 2025 15:48:55 +0000 (0:00:00.824) 0:00:11.213 ******* 2025-03-26 15:48:55.370708 | orchestrator | 
skipping: [testbed-node-3] 2025-03-26 15:48:55.373149 | orchestrator | 2025-03-26 15:48:55.373250 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:55.374383 | orchestrator | Wednesday 26 March 2025 15:48:55 +0000 (0:00:00.313) 0:00:11.526 ******* 2025-03-26 15:48:55.620372 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:55.620500 | orchestrator | 2025-03-26 15:48:55.621163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:55.626220 | orchestrator | Wednesday 26 March 2025 15:48:55 +0000 (0:00:00.248) 0:00:11.774 ******* 2025-03-26 15:48:55.877671 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:55.879180 | orchestrator | 2025-03-26 15:48:55.879517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:48:55.879941 | orchestrator | Wednesday 26 March 2025 15:48:55 +0000 (0:00:00.257) 0:00:12.032 ******* 2025-03-26 15:48:56.115714 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:56.116316 | orchestrator | 2025-03-26 15:48:56.116530 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-26 15:48:56.116760 | orchestrator | Wednesday 26 March 2025 15:48:56 +0000 (0:00:00.240) 0:00:12.272 ******* 2025-03-26 15:48:56.339708 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-03-26 15:48:56.339853 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-03-26 15:48:56.340217 | orchestrator | 2025-03-26 15:48:56.345072 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-26 15:48:56.714467 | orchestrator | Wednesday 26 March 2025 15:48:56 +0000 (0:00:00.219) 0:00:12.492 ******* 2025-03-26 15:48:56.714532 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:56.714689 | orchestrator | 2025-03-26 15:48:56.715571 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-26 15:48:56.715933 | orchestrator | Wednesday 26 March 2025 15:48:56 +0000 (0:00:00.378) 0:00:12.871 ******* 2025-03-26 15:48:56.884482 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:56.890391 | orchestrator | 2025-03-26 15:48:56.892575 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-26 15:48:56.895675 | orchestrator | Wednesday 26 March 2025 15:48:56 +0000 (0:00:00.165) 0:00:13.037 ******* 2025-03-26 15:48:57.073766 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:57.075532 | orchestrator | 2025-03-26 15:48:57.077155 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-26 15:48:57.078613 | orchestrator | Wednesday 26 March 2025 15:48:57 +0000 (0:00:00.192) 0:00:13.229 ******* 2025-03-26 15:48:57.224734 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:48:57.225728 | orchestrator | 2025-03-26 15:48:57.226484 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-26 15:48:57.227062 | orchestrator | Wednesday 26 March 2025 15:48:57 +0000 (0:00:00.150) 0:00:13.380 ******* 2025-03-26 15:48:57.425166 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5f4a1373-2d27-5995-abd5-5c6678505b20'}}) 2025-03-26 15:48:57.427708 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': 'dd5d4016-0a76-5f2c-8b69-aeca47fee476'}}) 2025-03-26 15:48:57.430169 | orchestrator | 2025-03-26 15:48:57.431070 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-26 15:48:57.432064 | orchestrator | Wednesday 26 March 2025 15:48:57 +0000 (0:00:00.199) 0:00:13.580 ******* 2025-03-26 15:48:57.605615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5f4a1373-2d27-5995-abd5-5c6678505b20'}})  2025-03-26 15:48:57.607767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dd5d4016-0a76-5f2c-8b69-aeca47fee476'}})  2025-03-26 15:48:57.608689 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:57.608721 | orchestrator | 2025-03-26 15:48:57.609592 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-26 15:48:57.610093 | orchestrator | Wednesday 26 March 2025 15:48:57 +0000 (0:00:00.181) 0:00:13.761 ******* 2025-03-26 15:48:57.816530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5f4a1373-2d27-5995-abd5-5c6678505b20'}})  2025-03-26 15:48:57.819291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dd5d4016-0a76-5f2c-8b69-aeca47fee476'}})  2025-03-26 15:48:57.819974 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:57.820000 | orchestrator | 2025-03-26 15:48:57.820021 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-26 15:48:57.823349 | orchestrator | Wednesday 26 March 2025 15:48:57 +0000 (0:00:00.211) 0:00:13.973 ******* 2025-03-26 15:48:58.037270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5f4a1373-2d27-5995-abd5-5c6678505b20'}})  2025-03-26 15:48:58.038110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dd5d4016-0a76-5f2c-8b69-aeca47fee476'}})  2025-03-26 15:48:58.039479 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:58.040348 | orchestrator | 2025-03-26 15:48:58.040996 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-03-26 15:48:58.042930 | orchestrator | Wednesday 26 March 2025 15:48:58 +0000 (0:00:00.220) 0:00:14.193 ******* 2025-03-26 15:48:58.200504 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:48:58.201757 | orchestrator | 2025-03-26 15:48:58.205098 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-26 15:48:58.209862 | orchestrator | Wednesday 26 March 2025 15:48:58 +0000 (0:00:00.161) 0:00:14.355 ******* 2025-03-26 15:48:58.392441 | orchestrator | ok: [testbed-node-3] 2025-03-26 15:48:58.393057 | orchestrator | 2025-03-26 15:48:58.393397 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-26 15:48:58.393453 | orchestrator | Wednesday 26 March 2025 15:48:58 +0000 (0:00:00.192) 0:00:14.548 ******* 2025-03-26 15:48:58.544676 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:58.544836 | orchestrator | 2025-03-26 15:48:58.545535 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-26 15:48:58.546584 | orchestrator | Wednesday 26 March 2025 15:48:58 +0000 (0:00:00.151) 0:00:14.699 ******* 2025-03-26 15:48:58.709013 | orchestrator | skipping: [testbed-node-3] 2025-03-26 
15:48:58.710401 | orchestrator | 2025-03-26 15:48:58.711559 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-26 15:48:58.712982 | orchestrator | Wednesday 26 March 2025 15:48:58 +0000 (0:00:00.163) 0:00:14.863 ******* 2025-03-26 15:48:59.107741 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:59.109585 | orchestrator | 2025-03-26 15:48:59.110499 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-26 15:48:59.111254 | orchestrator | Wednesday 26 March 2025 15:48:59 +0000 (0:00:00.400) 0:00:15.264 ******* 2025-03-26 15:48:59.260642 | orchestrator | ok: [testbed-node-3] => { 2025-03-26 15:48:59.262628 | orchestrator |  "ceph_osd_devices": { 2025-03-26 15:48:59.264569 | orchestrator |  "sdb": { 2025-03-26 15:48:59.265714 | orchestrator |  "osd_lvm_uuid": "5f4a1373-2d27-5995-abd5-5c6678505b20" 2025-03-26 15:48:59.268387 | orchestrator |  }, 2025-03-26 15:48:59.268737 | orchestrator |  "sdc": { 2025-03-26 15:48:59.269852 | orchestrator |  "osd_lvm_uuid": "dd5d4016-0a76-5f2c-8b69-aeca47fee476" 2025-03-26 15:48:59.271038 | orchestrator |  } 2025-03-26 15:48:59.271784 | orchestrator |  } 2025-03-26 15:48:59.272454 | orchestrator | } 2025-03-26 15:48:59.273249 | orchestrator | 2025-03-26 15:48:59.273609 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-26 15:48:59.274231 | orchestrator | Wednesday 26 March 2025 15:48:59 +0000 (0:00:00.153) 0:00:15.417 ******* 2025-03-26 15:48:59.420905 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:59.421729 | orchestrator | 2025-03-26 15:48:59.425350 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-26 15:48:59.426464 | orchestrator | Wednesday 26 March 2025 15:48:59 +0000 (0:00:00.157) 0:00:15.574 ******* 2025-03-26 15:48:59.567621 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:59.568933 | orchestrator | 2025-03-26 15:48:59.571006 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-26 15:48:59.572385 | orchestrator | Wednesday 26 March 2025 15:48:59 +0000 (0:00:00.148) 0:00:15.723 ******* 2025-03-26 15:48:59.722376 | orchestrator | skipping: [testbed-node-3] 2025-03-26 15:48:59.724929 | orchestrator | 2025-03-26 15:48:59.729495 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-26 15:49:00.050078 | orchestrator | Wednesday 26 March 2025 15:48:59 +0000 (0:00:00.154) 0:00:15.877 ******* 2025-03-26 15:49:00.050224 | orchestrator | changed: [testbed-node-3] => { 2025-03-26 15:49:00.051032 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-26 15:49:00.052495 | orchestrator |  "ceph_osd_devices": { 2025-03-26 15:49:00.052809 | orchestrator |  "sdb": { 2025-03-26 15:49:00.054789 | orchestrator |  "osd_lvm_uuid": "5f4a1373-2d27-5995-abd5-5c6678505b20" 2025-03-26 15:49:00.057623 | orchestrator |  }, 2025-03-26 15:49:00.057720 | orchestrator |  "sdc": { 2025-03-26 15:49:00.058456 | orchestrator |  "osd_lvm_uuid": "dd5d4016-0a76-5f2c-8b69-aeca47fee476" 2025-03-26 15:49:00.058924 | orchestrator |  } 2025-03-26 15:49:00.059703 | orchestrator |  }, 2025-03-26 15:49:00.059975 | orchestrator |  "lvm_volumes": [ 2025-03-26 15:49:00.060420 | orchestrator |  { 2025-03-26 15:49:00.061223 | orchestrator |  "data": "osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20", 2025-03-26 15:49:00.061893 | orchestrator |  
"data_vg": "ceph-5f4a1373-2d27-5995-abd5-5c6678505b20" 2025-03-26 15:49:00.062143 | orchestrator |  }, 2025-03-26 15:49:00.062601 | orchestrator |  { 2025-03-26 15:49:00.063340 | orchestrator |  "data": "osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476", 2025-03-26 15:49:00.063463 | orchestrator |  "data_vg": "ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476" 2025-03-26 15:49:00.063763 | orchestrator |  } 2025-03-26 15:49:00.064468 | orchestrator |  ] 2025-03-26 15:49:00.064947 | orchestrator |  } 2025-03-26 15:49:00.065794 | orchestrator | } 2025-03-26 15:49:00.065986 | orchestrator | 2025-03-26 15:49:00.066062 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-03-26 15:49:00.066555 | orchestrator | Wednesday 26 March 2025 15:49:00 +0000 (0:00:00.328) 0:00:16.206 ******* 2025-03-26 15:49:02.544552 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-26 15:49:02.545899 | orchestrator | 2025-03-26 15:49:02.546740 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-03-26 15:49:02.547454 | orchestrator | 2025-03-26 15:49:02.547483 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-26 15:49:02.547502 | orchestrator | Wednesday 26 March 2025 15:49:02 +0000 (0:00:02.495) 0:00:18.701 ******* 2025-03-26 15:49:02.858297 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-26 15:49:02.863531 | orchestrator | 2025-03-26 15:49:02.863621 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-26 15:49:02.864924 | orchestrator | Wednesday 26 March 2025 15:49:02 +0000 (0:00:00.308) 0:00:19.010 ******* 2025-03-26 15:49:03.122201 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:49:03.124852 | orchestrator | 2025-03-26 15:49:03.127281 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:03.128046 | orchestrator | Wednesday 26 March 2025 15:49:03 +0000 (0:00:00.264) 0:00:19.275 ******* 2025-03-26 15:49:03.551327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-03-26 15:49:03.551807 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-03-26 15:49:03.552051 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-03-26 15:49:03.552342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-03-26 15:49:03.553276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-03-26 15:49:03.554228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-03-26 15:49:03.556183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-03-26 15:49:03.556599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-03-26 15:49:03.559061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-03-26 15:49:03.560393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-03-26 15:49:03.561741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-03-26 15:49:03.562921 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-03-26 15:49:03.563650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-03-26 15:49:03.565895 | orchestrator | 2025-03-26 15:49:03.566479 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:03.566515 | orchestrator | Wednesday 26 March 2025 15:49:03 +0000 (0:00:00.429) 0:00:19.705 ******* 2025-03-26 15:49:03.758255 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:03.759097 | orchestrator | 2025-03-26 15:49:03.759706 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:03.760826 | orchestrator | Wednesday 26 March 2025 15:49:03 +0000 (0:00:00.208) 0:00:19.914 ******* 2025-03-26 15:49:03.993731 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:03.994883 | orchestrator | 2025-03-26 15:49:03.995602 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:03.996037 | orchestrator | Wednesday 26 March 2025 15:49:03 +0000 (0:00:00.234) 0:00:20.148 ******* 2025-03-26 15:49:04.223562 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:04.223819 | orchestrator | 2025-03-26 15:49:04.224692 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:04.224958 | orchestrator | Wednesday 26 March 2025 15:49:04 +0000 (0:00:00.231) 0:00:20.379 ******* 2025-03-26 15:49:04.917630 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:04.920219 | orchestrator | 2025-03-26 15:49:05.173765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:05.173860 | orchestrator | Wednesday 26 March 2025 15:49:04 +0000 (0:00:00.689) 0:00:21.069 ******* 2025-03-26 15:49:05.173889 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:05.175151 | orchestrator | 2025-03-26 15:49:05.181324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:05.181584 | orchestrator | Wednesday 26 March 2025 15:49:05 +0000 (0:00:00.258) 0:00:21.328 ******* 2025-03-26 15:49:05.432031 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:05.432553 | orchestrator | 2025-03-26 15:49:05.433672 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:05.434579 | orchestrator | Wednesday 26 March 2025 15:49:05 +0000 (0:00:00.257) 0:00:21.586 ******* 2025-03-26 15:49:05.641194 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:05.642542 | orchestrator | 2025-03-26 15:49:05.643880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:05.648225 | orchestrator | Wednesday 26 March 2025 15:49:05 +0000 (0:00:00.209) 0:00:21.795 ******* 2025-03-26 15:49:05.847696 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:05.848587 | orchestrator | 2025-03-26 15:49:05.850098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:05.850798 | orchestrator | Wednesday 26 March 2025 15:49:05 +0000 (0:00:00.207) 0:00:22.003 ******* 2025-03-26 15:49:06.357827 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_16996a57-a85a-47ec-96d8-6a9835d4cef6) 2025-03-26 15:49:06.360680 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_16996a57-a85a-47ec-96d8-6a9835d4cef6) 2025-03-26 15:49:06.362219 | orchestrator | 2025-03-26 15:49:06.830330 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:06.830469 | orchestrator | Wednesday 26 March 2025 15:49:06 +0000 (0:00:00.510) 0:00:22.513 ******* 2025-03-26 15:49:06.830501 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_461b6804-a72b-4d73-873c-350a148be214) 2025-03-26 15:49:06.832165 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_461b6804-a72b-4d73-873c-350a148be214) 2025-03-26 15:49:06.835561 | orchestrator | 2025-03-26 15:49:06.837696 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:06.840255 | orchestrator | Wednesday 26 March 2025 15:49:06 +0000 (0:00:00.472) 0:00:22.985 ******* 2025-03-26 15:49:07.297934 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1f0ffd5a-0515-4e82-9e1a-64c4889ae37d) 2025-03-26 15:49:07.299083 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1f0ffd5a-0515-4e82-9e1a-64c4889ae37d) 2025-03-26 15:49:07.304813 | orchestrator | 2025-03-26 15:49:08.012479 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:08.012578 | orchestrator | Wednesday 26 March 2025 15:49:07 +0000 (0:00:00.466) 0:00:23.452 ******* 2025-03-26 15:49:08.012608 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3084e68a-ebad-4437-bd94-82607113cb35) 2025-03-26 15:49:08.013518 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3084e68a-ebad-4437-bd94-82607113cb35) 2025-03-26 15:49:08.014299 | orchestrator | 2025-03-26 15:49:08.014944 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:08.015384 | orchestrator | Wednesday 26 March 2025 15:49:08 +0000 (0:00:00.714) 0:00:24.166 ******* 2025-03-26 15:49:08.620220 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-26 15:49:08.621653 | orchestrator | 2025-03-26 15:49:08.623850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:08.625014 | orchestrator | Wednesday 26 March 2025 15:49:08 +0000 (0:00:00.610) 0:00:24.776 ******* 2025-03-26 15:49:09.340216 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-03-26 15:49:09.345910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-03-26 15:49:09.346980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-03-26 15:49:09.348879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-03-26 15:49:09.348907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-03-26 15:49:09.348927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-03-26 15:49:09.349358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-03-26 15:49:09.350292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-03-26 15:49:09.350625 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-03-26 15:49:09.351898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-03-26 15:49:09.352834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-03-26 15:49:09.353544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-03-26 15:49:09.355114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-03-26 15:49:09.356060 | orchestrator | 2025-03-26 15:49:09.357720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:09.357868 | orchestrator | Wednesday 26 March 2025 15:49:09 +0000 (0:00:00.718) 0:00:25.494 ******* 2025-03-26 15:49:09.570736 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:09.570919 | orchestrator | 2025-03-26 15:49:09.571915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:09.572335 | orchestrator | Wednesday 26 March 2025 15:49:09 +0000 (0:00:00.230) 0:00:25.725 ******* 2025-03-26 15:49:09.768079 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:09.768837 | orchestrator | 2025-03-26 15:49:09.769856 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:09.770663 | orchestrator | Wednesday 26 March 2025 15:49:09 +0000 (0:00:00.199) 0:00:25.924 ******* 2025-03-26 15:49:09.975246 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:09.975906 | orchestrator | 2025-03-26 15:49:09.976535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:09.977149 | orchestrator | Wednesday 26 March 2025 15:49:09 +0000 (0:00:00.206) 0:00:26.130 ******* 2025-03-26 15:49:10.202286 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:10.202556 | orchestrator | 2025-03-26 15:49:10.202934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:10.203040 | orchestrator | Wednesday 26 March 2025 15:49:10 +0000 (0:00:00.227) 0:00:26.358 ******* 2025-03-26 15:49:10.416382 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:10.418533 | orchestrator | 2025-03-26 15:49:10.420894 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:10.650540 | orchestrator | Wednesday 26 March 2025 15:49:10 +0000 (0:00:00.214) 0:00:26.572 ******* 2025-03-26 15:49:10.650591 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:10.651370 | orchestrator | 2025-03-26 15:49:10.652665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:10.653620 | orchestrator | Wednesday 26 March 2025 15:49:10 +0000 (0:00:00.232) 0:00:26.805 ******* 2025-03-26 15:49:10.902755 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:10.904035 | orchestrator | 2025-03-26 15:49:10.904897 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:10.906170 | orchestrator | Wednesday 26 March 2025 15:49:10 +0000 (0:00:00.253) 0:00:27.058 ******* 2025-03-26 15:49:11.102250 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:11.103515 | orchestrator | 2025-03-26 15:49:11.104398 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-03-26 15:49:11.105849 | orchestrator | Wednesday 26 March 2025 15:49:11 +0000 (0:00:00.197) 0:00:27.256 ******* 2025-03-26 15:49:12.238803 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-03-26 15:49:12.240248 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-03-26 15:49:12.243037 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-03-26 15:49:12.243234 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-03-26 15:49:12.243261 | orchestrator | 2025-03-26 15:49:12.244329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:12.245120 | orchestrator | Wednesday 26 March 2025 15:49:12 +0000 (0:00:01.136) 0:00:28.393 ******* 2025-03-26 15:49:12.471358 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:12.471800 | orchestrator | 2025-03-26 15:49:12.472524 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:12.473001 | orchestrator | Wednesday 26 March 2025 15:49:12 +0000 (0:00:00.233) 0:00:28.626 ******* 2025-03-26 15:49:12.706951 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:12.708458 | orchestrator | 2025-03-26 15:49:12.709256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:12.710193 | orchestrator | Wednesday 26 March 2025 15:49:12 +0000 (0:00:00.233) 0:00:28.860 ******* 2025-03-26 15:49:12.929175 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:12.930216 | orchestrator | 2025-03-26 15:49:12.931581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:12.933012 | orchestrator | Wednesday 26 March 2025 15:49:12 +0000 (0:00:00.224) 0:00:29.084 ******* 2025-03-26 15:49:13.151993 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:13.153327 | orchestrator | 2025-03-26 15:49:13.156321 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-26 15:49:13.356778 | orchestrator | Wednesday 26 March 2025 15:49:13 +0000 (0:00:00.222) 0:00:29.307 ******* 2025-03-26 15:49:13.356848 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-03-26 15:49:13.357366 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-03-26 15:49:13.358466 | orchestrator | 2025-03-26 15:49:13.361159 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-26 15:49:13.520214 | orchestrator | Wednesday 26 March 2025 15:49:13 +0000 (0:00:00.205) 0:00:29.512 ******* 2025-03-26 15:49:13.520298 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:13.521158 | orchestrator | 2025-03-26 15:49:13.521186 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-26 15:49:13.522173 | orchestrator | Wednesday 26 March 2025 15:49:13 +0000 (0:00:00.163) 0:00:29.675 ******* 2025-03-26 15:49:13.700815 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:13.702103 | orchestrator | 2025-03-26 15:49:13.702132 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-26 15:49:13.702974 | orchestrator | Wednesday 26 March 2025 15:49:13 +0000 (0:00:00.177) 0:00:29.852 ******* 2025-03-26 15:49:13.859272 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:13.860716 | orchestrator | 2025-03-26 
15:49:13.861776 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-26 15:49:13.862789 | orchestrator | Wednesday 26 March 2025 15:49:13 +0000 (0:00:00.161) 0:00:30.014 ******* 2025-03-26 15:49:14.016906 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:49:14.017921 | orchestrator | 2025-03-26 15:49:14.019267 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-26 15:49:14.020022 | orchestrator | Wednesday 26 March 2025 15:49:14 +0000 (0:00:00.157) 0:00:30.172 ******* 2025-03-26 15:49:14.217071 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3792eb61-f724-5898-a357-e4730d4e9a9e'}}) 2025-03-26 15:49:14.217984 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5825628-1db3-5928-b5a4-d100b751e871'}}) 2025-03-26 15:49:14.219385 | orchestrator | 2025-03-26 15:49:14.221474 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-26 15:49:14.222192 | orchestrator | Wednesday 26 March 2025 15:49:14 +0000 (0:00:00.199) 0:00:30.371 ******* 2025-03-26 15:49:14.617860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3792eb61-f724-5898-a357-e4730d4e9a9e'}})  2025-03-26 15:49:14.618101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5825628-1db3-5928-b5a4-d100b751e871'}})  2025-03-26 15:49:14.619005 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:14.619537 | orchestrator | 2025-03-26 15:49:14.621280 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-26 15:49:14.818623 | orchestrator | Wednesday 26 March 2025 15:49:14 +0000 (0:00:00.400) 0:00:30.771 ******* 2025-03-26 15:49:14.818700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3792eb61-f724-5898-a357-e4730d4e9a9e'}})  2025-03-26 15:49:14.819380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5825628-1db3-5928-b5a4-d100b751e871'}})  2025-03-26 15:49:14.819437 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:14.819822 | orchestrator | 2025-03-26 15:49:14.820500 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-26 15:49:14.820859 | orchestrator | Wednesday 26 March 2025 15:49:14 +0000 (0:00:00.202) 0:00:30.974 ******* 2025-03-26 15:49:15.013307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3792eb61-f724-5898-a357-e4730d4e9a9e'}})  2025-03-26 15:49:15.013508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5825628-1db3-5928-b5a4-d100b751e871'}})  2025-03-26 15:49:15.014061 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:15.014314 | orchestrator | 2025-03-26 15:49:15.014812 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-03-26 15:49:15.015069 | orchestrator | Wednesday 26 March 2025 15:49:15 +0000 (0:00:00.194) 0:00:31.169 ******* 2025-03-26 15:49:15.186588 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:49:15.187366 | orchestrator | 2025-03-26 15:49:15.188140 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-26 15:49:15.190691 | orchestrator | Wednesday 26 March 2025 15:49:15 +0000 
(0:00:00.170) 0:00:31.339 ******* 2025-03-26 15:49:15.362842 | orchestrator | ok: [testbed-node-4] 2025-03-26 15:49:15.362964 | orchestrator | 2025-03-26 15:49:15.363545 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-26 15:49:15.363849 | orchestrator | Wednesday 26 March 2025 15:49:15 +0000 (0:00:00.178) 0:00:31.518 ******* 2025-03-26 15:49:15.535852 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:15.536055 | orchestrator | 2025-03-26 15:49:15.536089 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-26 15:49:15.537373 | orchestrator | Wednesday 26 March 2025 15:49:15 +0000 (0:00:00.173) 0:00:31.692 ******* 2025-03-26 15:49:15.717991 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:15.718572 | orchestrator | 2025-03-26 15:49:15.719200 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-26 15:49:15.721846 | orchestrator | Wednesday 26 March 2025 15:49:15 +0000 (0:00:00.180) 0:00:31.872 ******* 2025-03-26 15:49:15.849909 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:15.850673 | orchestrator | 2025-03-26 15:49:15.851833 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-26 15:49:15.852867 | orchestrator | Wednesday 26 March 2025 15:49:15 +0000 (0:00:00.132) 0:00:32.005 ******* 2025-03-26 15:49:16.021225 | orchestrator | ok: [testbed-node-4] => { 2025-03-26 15:49:16.021825 | orchestrator |  "ceph_osd_devices": { 2025-03-26 15:49:16.023301 | orchestrator |  "sdb": { 2025-03-26 15:49:16.024463 | orchestrator |  "osd_lvm_uuid": "3792eb61-f724-5898-a357-e4730d4e9a9e" 2025-03-26 15:49:16.025606 | orchestrator |  }, 2025-03-26 15:49:16.027384 | orchestrator |  "sdc": { 2025-03-26 15:49:16.028528 | orchestrator |  "osd_lvm_uuid": "e5825628-1db3-5928-b5a4-d100b751e871" 2025-03-26 15:49:16.029601 | orchestrator |  } 2025-03-26 15:49:16.030593 | orchestrator |  } 2025-03-26 15:49:16.031477 | orchestrator | } 2025-03-26 15:49:16.031717 | orchestrator | 2025-03-26 15:49:16.032583 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-26 15:49:16.033591 | orchestrator | Wednesday 26 March 2025 15:49:16 +0000 (0:00:00.171) 0:00:32.176 ******* 2025-03-26 15:49:16.165293 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:16.165786 | orchestrator | 2025-03-26 15:49:16.166464 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-26 15:49:16.167076 | orchestrator | Wednesday 26 March 2025 15:49:16 +0000 (0:00:00.144) 0:00:32.320 ******* 2025-03-26 15:49:16.322256 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:16.322807 | orchestrator | 2025-03-26 15:49:16.322849 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-26 15:49:16.460444 | orchestrator | Wednesday 26 March 2025 15:49:16 +0000 (0:00:00.157) 0:00:32.478 ******* 2025-03-26 15:49:16.460535 | orchestrator | skipping: [testbed-node-4] 2025-03-26 15:49:16.461169 | orchestrator | 2025-03-26 15:49:16.462195 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-26 15:49:16.462229 | orchestrator | Wednesday 26 March 2025 15:49:16 +0000 (0:00:00.138) 0:00:32.617 ******* 2025-03-26 15:49:17.105023 | orchestrator | changed: [testbed-node-4] => { 2025-03-26 15:49:17.105708 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-26 15:49:17.106858 | orchestrator |  "ceph_osd_devices": { 2025-03-26 15:49:17.110644 | orchestrator |  "sdb": { 2025-03-26 15:49:17.111316 | orchestrator |  "osd_lvm_uuid": "3792eb61-f724-5898-a357-e4730d4e9a9e" 2025-03-26 15:49:17.112916 | orchestrator |  }, 2025-03-26 15:49:17.113617 | orchestrator |  "sdc": { 2025-03-26 15:49:17.114271 | orchestrator |  "osd_lvm_uuid": "e5825628-1db3-5928-b5a4-d100b751e871" 2025-03-26 15:49:17.114551 | orchestrator |  } 2025-03-26 15:49:17.116041 | orchestrator |  }, 2025-03-26 15:49:17.116452 | orchestrator |  "lvm_volumes": [ 2025-03-26 15:49:17.117305 | orchestrator |  { 2025-03-26 15:49:17.117645 | orchestrator |  "data": "osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e", 2025-03-26 15:49:17.117883 | orchestrator |  "data_vg": "ceph-3792eb61-f724-5898-a357-e4730d4e9a9e" 2025-03-26 15:49:17.118626 | orchestrator |  }, 2025-03-26 15:49:17.118694 | orchestrator |  { 2025-03-26 15:49:17.119095 | orchestrator |  "data": "osd-block-e5825628-1db3-5928-b5a4-d100b751e871", 2025-03-26 15:49:17.119704 | orchestrator |  "data_vg": "ceph-e5825628-1db3-5928-b5a4-d100b751e871" 2025-03-26 15:49:17.120210 | orchestrator |  } 2025-03-26 15:49:17.120672 | orchestrator |  ] 2025-03-26 15:49:17.121535 | orchestrator |  } 2025-03-26 15:49:17.122505 | orchestrator | } 2025-03-26 15:49:17.122904 | orchestrator | 2025-03-26 15:49:17.123705 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-03-26 15:49:17.124209 | orchestrator | Wednesday 26 March 2025 15:49:17 +0000 (0:00:00.643) 0:00:33.260 ******* 2025-03-26 15:49:18.619124 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-26 15:49:18.619351 | orchestrator | 2025-03-26 15:49:18.620461 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-03-26 15:49:18.621293 | orchestrator | 2025-03-26 15:49:18.623009 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-26 15:49:18.878300 | orchestrator | Wednesday 26 March 2025 15:49:18 +0000 (0:00:01.513) 0:00:34.773 ******* 2025-03-26 15:49:18.878497 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-26 15:49:18.878573 | orchestrator | 2025-03-26 15:49:18.880487 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-26 15:49:18.881250 | orchestrator | Wednesday 26 March 2025 15:49:18 +0000 (0:00:00.260) 0:00:35.034 ******* 2025-03-26 15:49:19.529994 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:49:19.530189 | orchestrator | 2025-03-26 15:49:19.531101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:19.532274 | orchestrator | Wednesday 26 March 2025 15:49:19 +0000 (0:00:00.651) 0:00:35.685 ******* 2025-03-26 15:49:19.990502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-03-26 15:49:19.991061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-03-26 15:49:19.991951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-03-26 15:49:19.994577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-03-26 15:49:19.995318 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-03-26 15:49:19.996148 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-03-26 15:49:19.996826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-03-26 15:49:19.998122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-03-26 15:49:19.998362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-03-26 15:49:19.999637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-03-26 15:49:20.000357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-03-26 15:49:20.001063 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-03-26 15:49:20.001892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-03-26 15:49:20.002362 | orchestrator | 2025-03-26 15:49:20.002846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:20.003296 | orchestrator | Wednesday 26 March 2025 15:49:19 +0000 (0:00:00.458) 0:00:36.144 ******* 2025-03-26 15:49:20.248017 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:20.249175 | orchestrator | 2025-03-26 15:49:20.250528 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:20.250813 | orchestrator | Wednesday 26 March 2025 15:49:20 +0000 (0:00:00.259) 0:00:36.403 ******* 2025-03-26 15:49:20.449857 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:20.451154 | orchestrator | 2025-03-26 15:49:20.452126 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:20.453116 | orchestrator | Wednesday 26 March 2025 15:49:20 +0000 (0:00:00.201) 0:00:36.605 ******* 2025-03-26 15:49:20.668371 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:20.668675 | orchestrator | 2025-03-26 15:49:20.669499 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:20.670179 | orchestrator | Wednesday 26 March 2025 15:49:20 +0000 (0:00:00.219) 0:00:36.824 ******* 2025-03-26 15:49:20.878711 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:20.879714 | orchestrator | 2025-03-26 15:49:20.880118 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:20.880744 | orchestrator | Wednesday 26 March 2025 15:49:20 +0000 (0:00:00.210) 0:00:37.034 ******* 2025-03-26 15:49:21.101479 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:21.102100 | orchestrator | 2025-03-26 15:49:21.102139 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:21.102165 | orchestrator | Wednesday 26 March 2025 15:49:21 +0000 (0:00:00.221) 0:00:37.256 ******* 2025-03-26 15:49:21.300727 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:21.301222 | orchestrator | 2025-03-26 15:49:21.301539 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:21.302651 | orchestrator | Wednesday 26 March 2025 15:49:21 +0000 (0:00:00.199) 0:00:37.456 ******* 2025-03-26 15:49:21.506069 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:21.506536 
| orchestrator | 2025-03-26 15:49:21.506661 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:21.507456 | orchestrator | Wednesday 26 March 2025 15:49:21 +0000 (0:00:00.205) 0:00:37.661 ******* 2025-03-26 15:49:21.770648 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:21.772222 | orchestrator | 2025-03-26 15:49:21.772255 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:21.775263 | orchestrator | Wednesday 26 March 2025 15:49:21 +0000 (0:00:00.259) 0:00:37.921 ******* 2025-03-26 15:49:22.484648 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_778c827f-9114-48f3-8218-520802856b8b) 2025-03-26 15:49:22.484767 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_778c827f-9114-48f3-8218-520802856b8b) 2025-03-26 15:49:22.485636 | orchestrator | 2025-03-26 15:49:22.486599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:22.487567 | orchestrator | Wednesday 26 March 2025 15:49:22 +0000 (0:00:00.719) 0:00:38.640 ******* 2025-03-26 15:49:23.007716 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_47c04996-6b70-4edc-bb1e-4eda8ee8f7b1) 2025-03-26 15:49:23.008617 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_47c04996-6b70-4edc-bb1e-4eda8ee8f7b1) 2025-03-26 15:49:23.009622 | orchestrator | 2025-03-26 15:49:23.010705 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:23.011591 | orchestrator | Wednesday 26 March 2025 15:49:23 +0000 (0:00:00.519) 0:00:39.160 ******* 2025-03-26 15:49:23.461464 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_018eeca1-6df5-40cc-92d0-1bd4b2f4c48a) 2025-03-26 15:49:23.462609 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_018eeca1-6df5-40cc-92d0-1bd4b2f4c48a) 2025-03-26 15:49:23.462650 | orchestrator | 2025-03-26 15:49:23.463694 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:23.464685 | orchestrator | Wednesday 26 March 2025 15:49:23 +0000 (0:00:00.456) 0:00:39.617 ******* 2025-03-26 15:49:23.949305 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_173128c2-f7b8-43d8-ad86-e66bd69e13dc) 2025-03-26 15:49:23.952006 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_173128c2-f7b8-43d8-ad86-e66bd69e13dc) 2025-03-26 15:49:23.953092 | orchestrator | 2025-03-26 15:49:23.953347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 15:49:23.954180 | orchestrator | Wednesday 26 March 2025 15:49:23 +0000 (0:00:00.485) 0:00:40.102 ******* 2025-03-26 15:49:24.314245 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-26 15:49:24.315287 | orchestrator | 2025-03-26 15:49:24.318859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:24.827872 | orchestrator | Wednesday 26 March 2025 15:49:24 +0000 (0:00:00.366) 0:00:40.469 ******* 2025-03-26 15:49:24.828003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-03-26 15:49:24.828123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-03-26 15:49:24.829213 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-03-26 15:49:24.831123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-03-26 15:49:24.831195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-03-26 15:49:24.831219 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-03-26 15:49:24.832309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-03-26 15:49:24.832594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-03-26 15:49:24.834095 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-03-26 15:49:24.835058 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-03-26 15:49:24.835097 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-03-26 15:49:24.835870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-03-26 15:49:24.837499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-03-26 15:49:24.838856 | orchestrator | 2025-03-26 15:49:24.839340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:24.841130 | orchestrator | Wednesday 26 March 2025 15:49:24 +0000 (0:00:00.512) 0:00:40.981 ******* 2025-03-26 15:49:25.053560 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:25.054425 | orchestrator | 2025-03-26 15:49:25.054466 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:25.054550 | orchestrator | Wednesday 26 March 2025 15:49:25 +0000 (0:00:00.227) 0:00:41.209 ******* 2025-03-26 15:49:25.292168 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:25.292461 | orchestrator | 2025-03-26 15:49:25.294113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:25.294502 | orchestrator | Wednesday 26 March 2025 15:49:25 +0000 (0:00:00.239) 0:00:41.448 ******* 2025-03-26 15:49:25.513369 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:25.513929 | orchestrator | 2025-03-26 15:49:25.514332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:25.514715 | orchestrator | Wednesday 26 March 2025 15:49:25 +0000 (0:00:00.221) 0:00:41.669 ******* 2025-03-26 15:49:26.202382 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:26.203240 | orchestrator | 2025-03-26 15:49:26.204068 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:26.205120 | orchestrator | Wednesday 26 March 2025 15:49:26 +0000 (0:00:00.687) 0:00:42.357 ******* 2025-03-26 15:49:26.434577 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:26.434714 | orchestrator | 2025-03-26 15:49:26.435880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:26.436066 | orchestrator | Wednesday 26 March 2025 15:49:26 +0000 (0:00:00.232) 0:00:42.590 ******* 2025-03-26 15:49:26.642085 | orchestrator | skipping: [testbed-node-5] 2025-03-26 
15:49:26.643423 | orchestrator | 2025-03-26 15:49:26.644124 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:26.644157 | orchestrator | Wednesday 26 March 2025 15:49:26 +0000 (0:00:00.207) 0:00:42.797 ******* 2025-03-26 15:49:26.875498 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:26.875958 | orchestrator | 2025-03-26 15:49:26.878853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:26.879565 | orchestrator | Wednesday 26 March 2025 15:49:26 +0000 (0:00:00.231) 0:00:43.029 ******* 2025-03-26 15:49:27.101612 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:27.101947 | orchestrator | 2025-03-26 15:49:27.103256 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:27.104371 | orchestrator | Wednesday 26 March 2025 15:49:27 +0000 (0:00:00.225) 0:00:43.255 ******* 2025-03-26 15:49:27.794294 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-03-26 15:49:27.795303 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-03-26 15:49:27.795339 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-03-26 15:49:27.796187 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-03-26 15:49:27.797064 | orchestrator | 2025-03-26 15:49:27.798665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:27.799384 | orchestrator | Wednesday 26 March 2025 15:49:27 +0000 (0:00:00.691) 0:00:43.946 ******* 2025-03-26 15:49:28.008528 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:28.009759 | orchestrator | 2025-03-26 15:49:28.010674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:28.013001 | orchestrator | Wednesday 26 March 2025 15:49:28 +0000 (0:00:00.217) 0:00:44.163 ******* 2025-03-26 15:49:28.228913 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:28.229911 | orchestrator | 2025-03-26 15:49:28.230631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:28.231566 | orchestrator | Wednesday 26 March 2025 15:49:28 +0000 (0:00:00.217) 0:00:44.381 ******* 2025-03-26 15:49:28.443295 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:28.444197 | orchestrator | 2025-03-26 15:49:28.445870 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 15:49:28.694940 | orchestrator | Wednesday 26 March 2025 15:49:28 +0000 (0:00:00.218) 0:00:44.599 ******* 2025-03-26 15:49:28.695010 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:28.696895 | orchestrator | 2025-03-26 15:49:28.697981 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-03-26 15:49:28.699125 | orchestrator | Wednesday 26 March 2025 15:49:28 +0000 (0:00:00.250) 0:00:44.849 ******* 2025-03-26 15:49:29.130689 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-03-26 15:49:29.132742 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-03-26 15:49:29.134655 | orchestrator | 2025-03-26 15:49:29.137459 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-03-26 15:49:29.139172 | orchestrator | Wednesday 26 March 2025 15:49:29 +0000 (0:00:00.435) 0:00:45.285 ******* 2025-03-26 15:49:29.292483 | 
orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:29.294364 | orchestrator | 2025-03-26 15:49:29.296206 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-03-26 15:49:29.297272 | orchestrator | Wednesday 26 March 2025 15:49:29 +0000 (0:00:00.162) 0:00:45.447 ******* 2025-03-26 15:49:29.434117 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:29.434526 | orchestrator | 2025-03-26 15:49:29.435182 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-03-26 15:49:29.435551 | orchestrator | Wednesday 26 March 2025 15:49:29 +0000 (0:00:00.143) 0:00:45.590 ******* 2025-03-26 15:49:29.602326 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:29.603205 | orchestrator | 2025-03-26 15:49:29.604382 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-03-26 15:49:29.604848 | orchestrator | Wednesday 26 March 2025 15:49:29 +0000 (0:00:00.167) 0:00:45.758 ******* 2025-03-26 15:49:29.756682 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:49:29.756868 | orchestrator | 2025-03-26 15:49:29.756890 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-03-26 15:49:29.757974 | orchestrator | Wednesday 26 March 2025 15:49:29 +0000 (0:00:00.154) 0:00:45.912 ******* 2025-03-26 15:49:29.970203 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a669efbe-38ad-5491-a7b2-472b52b48777'}}) 2025-03-26 15:49:29.971019 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ec9d4bae-9cf1-5a1f-8035-4dbd27640959'}}) 2025-03-26 15:49:29.971038 | orchestrator | 2025-03-26 15:49:29.971488 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-03-26 15:49:30.140086 | orchestrator | Wednesday 26 March 2025 15:49:29 +0000 (0:00:00.210) 0:00:46.123 ******* 2025-03-26 15:49:30.140164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a669efbe-38ad-5491-a7b2-472b52b48777'}})  2025-03-26 15:49:30.140661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ec9d4bae-9cf1-5a1f-8035-4dbd27640959'}})  2025-03-26 15:49:30.142845 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:30.145295 | orchestrator | 2025-03-26 15:49:30.145751 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-03-26 15:49:30.146145 | orchestrator | Wednesday 26 March 2025 15:49:30 +0000 (0:00:00.170) 0:00:46.294 ******* 2025-03-26 15:49:30.306850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a669efbe-38ad-5491-a7b2-472b52b48777'}})  2025-03-26 15:49:30.307433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ec9d4bae-9cf1-5a1f-8035-4dbd27640959'}})  2025-03-26 15:49:30.308120 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:30.309135 | orchestrator | 2025-03-26 15:49:30.309705 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-03-26 15:49:30.312925 | orchestrator | Wednesday 26 March 2025 15:49:30 +0000 (0:00:00.168) 0:00:46.463 ******* 2025-03-26 15:49:30.487549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a669efbe-38ad-5491-a7b2-472b52b48777'}})  2025-03-26 15:49:30.488257 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ec9d4bae-9cf1-5a1f-8035-4dbd27640959'}})  2025-03-26 15:49:30.488381 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:30.489090 | orchestrator | 2025-03-26 15:49:30.489686 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-03-26 15:49:30.490483 | orchestrator | Wednesday 26 March 2025 15:49:30 +0000 (0:00:00.180) 0:00:46.643 ******* 2025-03-26 15:49:30.626775 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:49:30.627509 | orchestrator | 2025-03-26 15:49:30.629150 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-03-26 15:49:30.629777 | orchestrator | Wednesday 26 March 2025 15:49:30 +0000 (0:00:00.139) 0:00:46.782 ******* 2025-03-26 15:49:30.789265 | orchestrator | ok: [testbed-node-5] 2025-03-26 15:49:30.789385 | orchestrator | 2025-03-26 15:49:30.789952 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-03-26 15:49:30.790835 | orchestrator | Wednesday 26 March 2025 15:49:30 +0000 (0:00:00.163) 0:00:46.945 ******* 2025-03-26 15:49:30.939508 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:30.941723 | orchestrator | 2025-03-26 15:49:30.941741 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-03-26 15:49:30.943054 | orchestrator | Wednesday 26 March 2025 15:49:30 +0000 (0:00:00.144) 0:00:47.090 ******* 2025-03-26 15:49:31.300464 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:31.300648 | orchestrator | 2025-03-26 15:49:31.302303 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-03-26 15:49:31.305449 | orchestrator | Wednesday 26 March 2025 15:49:31 +0000 (0:00:00.365) 0:00:47.455 ******* 2025-03-26 15:49:31.455730 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:31.455964 | orchestrator | 2025-03-26 15:49:31.456568 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-03-26 15:49:31.459381 | orchestrator | Wednesday 26 March 2025 15:49:31 +0000 (0:00:00.154) 0:00:47.610 ******* 2025-03-26 15:49:31.609862 | orchestrator | ok: [testbed-node-5] => { 2025-03-26 15:49:31.610251 | orchestrator |  "ceph_osd_devices": { 2025-03-26 15:49:31.611550 | orchestrator |  "sdb": { 2025-03-26 15:49:31.613611 | orchestrator |  "osd_lvm_uuid": "a669efbe-38ad-5491-a7b2-472b52b48777" 2025-03-26 15:49:31.614289 | orchestrator |  }, 2025-03-26 15:49:31.616204 | orchestrator |  "sdc": { 2025-03-26 15:49:31.617084 | orchestrator |  "osd_lvm_uuid": "ec9d4bae-9cf1-5a1f-8035-4dbd27640959" 2025-03-26 15:49:31.618120 | orchestrator |  } 2025-03-26 15:49:31.618648 | orchestrator |  } 2025-03-26 15:49:31.619530 | orchestrator | } 2025-03-26 15:49:31.620271 | orchestrator | 2025-03-26 15:49:31.620978 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-03-26 15:49:31.621749 | orchestrator | Wednesday 26 March 2025 15:49:31 +0000 (0:00:00.153) 0:00:47.763 ******* 2025-03-26 15:49:31.767361 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:31.768130 | orchestrator | 2025-03-26 15:49:31.772514 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-03-26 15:49:31.772915 | orchestrator | Wednesday 26 March 2025 15:49:31 +0000 (0:00:00.158) 0:00:47.921 ******* 2025-03-26 
15:49:31.916621 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:31.920216 | orchestrator | 2025-03-26 15:49:32.066699 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-03-26 15:49:32.066813 | orchestrator | Wednesday 26 March 2025 15:49:31 +0000 (0:00:00.147) 0:00:48.069 ******* 2025-03-26 15:49:32.066841 | orchestrator | skipping: [testbed-node-5] 2025-03-26 15:49:32.068032 | orchestrator | 2025-03-26 15:49:32.070204 | orchestrator | TASK [Print configuration data] ************************************************ 2025-03-26 15:49:32.071743 | orchestrator | Wednesday 26 March 2025 15:49:32 +0000 (0:00:00.151) 0:00:48.221 ******* 2025-03-26 15:49:32.367530 | orchestrator | changed: [testbed-node-5] => { 2025-03-26 15:49:32.368684 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-03-26 15:49:32.368744 | orchestrator |  "ceph_osd_devices": { 2025-03-26 15:49:32.371376 | orchestrator |  "sdb": { 2025-03-26 15:49:32.375157 | orchestrator |  "osd_lvm_uuid": "a669efbe-38ad-5491-a7b2-472b52b48777" 2025-03-26 15:49:32.376218 | orchestrator |  }, 2025-03-26 15:49:32.376772 | orchestrator |  "sdc": { 2025-03-26 15:49:32.377728 | orchestrator |  "osd_lvm_uuid": "ec9d4bae-9cf1-5a1f-8035-4dbd27640959" 2025-03-26 15:49:32.378762 | orchestrator |  } 2025-03-26 15:49:32.379738 | orchestrator |  }, 2025-03-26 15:49:32.380578 | orchestrator |  "lvm_volumes": [ 2025-03-26 15:49:32.381023 | orchestrator |  { 2025-03-26 15:49:32.382011 | orchestrator |  "data": "osd-block-a669efbe-38ad-5491-a7b2-472b52b48777", 2025-03-26 15:49:32.386590 | orchestrator |  "data_vg": "ceph-a669efbe-38ad-5491-a7b2-472b52b48777" 2025-03-26 15:49:32.387226 | orchestrator |  }, 2025-03-26 15:49:32.387542 | orchestrator |  { 2025-03-26 15:49:32.388062 | orchestrator |  "data": "osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959", 2025-03-26 15:49:32.388721 | orchestrator |  "data_vg": "ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959" 2025-03-26 15:49:32.389335 | orchestrator |  } 2025-03-26 15:49:32.390085 | orchestrator |  ] 2025-03-26 15:49:32.390415 | orchestrator |  } 2025-03-26 15:49:32.391034 | orchestrator | } 2025-03-26 15:49:32.391380 | orchestrator | 2025-03-26 15:49:32.392096 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-03-26 15:49:32.392496 | orchestrator | Wednesday 26 March 2025 15:49:32 +0000 (0:00:00.298) 0:00:48.520 ******* 2025-03-26 15:49:33.782499 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-26 15:49:33.783128 | orchestrator | 2025-03-26 15:49:33.783500 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 15:49:33.784083 | orchestrator | 2025-03-26 15:49:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 15:49:33.784557 | orchestrator | 2025-03-26 15:49:33 | INFO  | Please wait and do not abort execution. 
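The "Generate lvm_volumes structure (block only)" and "Compile lvm_volumes" tasks above map each entry of ceph_osd_devices to an lvm_volumes item of the form data: osd-block-<uuid> / data_vg: ceph-<uuid>, which is what the "Print configuration data" output for testbed-node-4 and testbed-node-5 shows. The standalone playbook below is only a minimal sketch of that mapping, not the actual OSISM role: the play name, the localhost target, and the reuse of the testbed-node-5 UUIDs as example data are assumptions for illustration.

- name: Sketch of lvm_volumes generation (block only)
  hosts: localhost
  gather_facts: false
  vars:
    # Example data copied from the testbed-node-5 output above
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: a669efbe-38ad-5491-a7b2-472b52b48777
      sdc:
        osd_lvm_uuid: ec9d4bae-9cf1-5a1f-8035-4dbd27640959
  tasks:
    - name: Generate lvm_volumes structure (block only)
      # Append one {data, data_vg} entry per OSD device, derived from its UUID
      ansible.builtin.set_fact:
        lvm_volumes: >-
          {{ lvm_volumes | default([]) +
             [{'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
               'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid}] }}
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Print configuration data
      ansible.builtin.debug:
        var: lvm_volumes

Running this locally prints the same two-element lvm_volumes list that the handler then writes to the configuration file on testbed-manager.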
2025-03-26 15:49:33.786090 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-26 15:49:33.787038 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-26 15:49:33.787655 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-03-26 15:49:33.788420 | orchestrator | 2025-03-26 15:49:33.789728 | orchestrator | 2025-03-26 15:49:33.790120 | orchestrator | 2025-03-26 15:49:33.791108 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-26 15:49:33.791568 | orchestrator | Wednesday 26 March 2025 15:49:33 +0000 (0:00:01.416) 0:00:49.937 ******* 2025-03-26 15:49:33.792562 | orchestrator | =============================================================================== 2025-03-26 15:49:33.792947 | orchestrator | Write configuration file ------------------------------------------------ 5.43s 2025-03-26 15:49:33.793490 | orchestrator | Add known partitions to the list of available block devices ------------- 1.68s 2025-03-26 15:49:33.794316 | orchestrator | Add known links to the list of available block devices ------------------ 1.49s 2025-03-26 15:49:33.794479 | orchestrator | Print configuration data ------------------------------------------------ 1.27s 2025-03-26 15:49:33.795061 | orchestrator | Get initial list of available block devices ----------------------------- 1.20s 2025-03-26 15:49:33.795512 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s 2025-03-26 15:49:33.796353 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.86s 2025-03-26 15:49:33.797422 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2025-03-26 15:49:33.798149 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.84s 2025-03-26 15:49:33.799677 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2025-03-26 15:49:33.800097 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2025-03-26 15:49:33.800520 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.75s 2025-03-26 15:49:33.801316 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2025-03-26 15:49:33.802333 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2025-03-26 15:49:33.803951 | orchestrator | Set WAL devices config data --------------------------------------------- 0.71s 2025-03-26 15:49:33.804585 | orchestrator | Generate WAL VG names --------------------------------------------------- 0.70s 2025-03-26 15:49:33.805476 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2025-03-26 15:49:33.806205 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-03-26 15:49:33.806665 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.69s 2025-03-26 15:49:33.807255 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2025-03-26 15:49:46.189162 | orchestrator | 2025-03-26 15:49:46 | INFO  | Task 24acbc1c-c128-427e-b9dc-8635acd1e7a6 is running in background. Output coming soon. 
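The ceph-create-lvm-devices play that starts below then turns each lvm_volumes entry into an actual volume group and logical volume ("Create block VGs" / "Create block LVs"): one ceph-<uuid> VG per OSD disk with a single osd-block-<uuid> LV inside it. The sketch below reproduces that pattern for a single device using the stock community.general LVM modules; it is not the OSISM task file, and the device path and UUID are placeholders, not values from this log.

- name: Sketch of block VG/LV creation for one OSD device
  hosts: localhost
  become: true
  gather_facts: false
  vars:
    osd_device: /dev/sdb                                # placeholder device path
    osd_lvm_uuid: 00000000-0000-0000-0000-000000000000  # placeholder UUID
  tasks:
    - name: Create block VGs
      # One VG per OSD device, named after the osd_lvm_uuid
      community.general.lvg:
        vg: "ceph-{{ osd_lvm_uuid }}"
        pvs: "{{ osd_device }}"

    - name: Create block LVs
      # A single LV spanning the whole VG, used later as the OSD block device
      community.general.lvol:
        vg: "ceph-{{ osd_lvm_uuid }}"
        lv: "osd-block-{{ osd_lvm_uuid }}"
        size: "100%FREE"
        shrink: false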
2025-03-26 16:49:48.824729 | orchestrator | 2025-03-26 16:49:48 | INFO  | Task 97638219-c517-419b-824a-56ba7a29a109 (ceph-create-lvm-devices) was prepared for execution. 2025-03-26 16:49:52.380767 | orchestrator | 2025-03-26 16:49:48 | INFO  | It takes a moment until task 97638219-c517-419b-824a-56ba7a29a109 (ceph-create-lvm-devices) has been started and output is visible here. 2025-03-26 16:49:52.380917 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-03-26 16:49:52.944124 | orchestrator | 2025-03-26 16:49:52.945307 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-26 16:49:52.948179 | orchestrator | 2025-03-26 16:49:53.217232 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-26 16:49:53.217299 | orchestrator | Wednesday 26 March 2025 16:49:52 +0000 (0:00:00.472) 0:00:00.472 ******* 2025-03-26 16:49:53.217326 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-03-26 16:49:53.217645 | orchestrator | 2025-03-26 16:49:53.217707 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-26 16:49:53.218227 | orchestrator | Wednesday 26 March 2025 16:49:53 +0000 (0:00:00.274) 0:00:00.747 ******* 2025-03-26 16:49:53.449832 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:49:53.451304 | orchestrator | 2025-03-26 16:49:53.452646 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:53.453546 | orchestrator | Wednesday 26 March 2025 16:49:53 +0000 (0:00:00.231) 0:00:00.979 ******* 2025-03-26 16:49:54.275703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-03-26 16:49:54.276173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-03-26 16:49:54.277116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-03-26 16:49:54.277391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-03-26 16:49:54.278149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-03-26 16:49:54.278549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-03-26 16:49:54.279554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-03-26 16:49:54.279869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-03-26 16:49:54.280895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-03-26 16:49:54.281047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-03-26 16:49:54.281615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-03-26 16:49:54.282066 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-03-26 16:49:54.282095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-03-26 16:49:54.282187 | orchestrator | 2025-03-26 16:49:54.282447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:54.282474 | orchestrator | Wednesday 26 March 
2025 16:49:54 +0000 (0:00:00.826) 0:00:01.805 ******* 2025-03-26 16:49:54.489143 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:49:54.489477 | orchestrator | 2025-03-26 16:49:54.489523 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:54.489555 | orchestrator | Wednesday 26 March 2025 16:49:54 +0000 (0:00:00.212) 0:00:02.018 ******* 2025-03-26 16:49:54.716934 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:49:54.717882 | orchestrator | 2025-03-26 16:49:54.719907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:54.931000 | orchestrator | Wednesday 26 March 2025 16:49:54 +0000 (0:00:00.229) 0:00:02.247 ******* 2025-03-26 16:49:54.931104 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:49:54.931495 | orchestrator | 2025-03-26 16:49:54.931540 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:54.932635 | orchestrator | Wednesday 26 March 2025 16:49:54 +0000 (0:00:00.213) 0:00:02.461 ******* 2025-03-26 16:49:55.141703 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:49:55.143040 | orchestrator | 2025-03-26 16:49:55.143640 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:55.144624 | orchestrator | Wednesday 26 March 2025 16:49:55 +0000 (0:00:00.210) 0:00:02.672 ******* 2025-03-26 16:49:55.354557 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:49:55.355825 | orchestrator | 2025-03-26 16:49:55.356307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:55.356983 | orchestrator | Wednesday 26 March 2025 16:49:55 +0000 (0:00:00.212) 0:00:02.884 ******* 2025-03-26 16:49:55.603908 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:49:55.604771 | orchestrator | 2025-03-26 16:49:55.605040 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:55.605462 | orchestrator | Wednesday 26 March 2025 16:49:55 +0000 (0:00:00.249) 0:00:03.133 ******* 2025-03-26 16:49:55.813252 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:49:55.813655 | orchestrator | 2025-03-26 16:49:55.814065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:55.814926 | orchestrator | Wednesday 26 March 2025 16:49:55 +0000 (0:00:00.210) 0:00:03.344 ******* 2025-03-26 16:49:56.023655 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:49:56.023768 | orchestrator | 2025-03-26 16:49:56.024693 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:56.025581 | orchestrator | Wednesday 26 March 2025 16:49:56 +0000 (0:00:00.209) 0:00:03.554 ******* 2025-03-26 16:49:56.819460 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e8a4f1c8-c63b-414f-8506-2e323c8cffff) 2025-03-26 16:49:56.819662 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e8a4f1c8-c63b-414f-8506-2e323c8cffff) 2025-03-26 16:49:56.819690 | orchestrator | 2025-03-26 16:49:56.820489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:56.821352 | orchestrator | Wednesday 26 March 2025 16:49:56 +0000 (0:00:00.791) 0:00:04.346 ******* 2025-03-26 16:49:57.578916 | orchestrator | ok: [testbed-node-3] => 
(item=scsi-0QEMU_QEMU_HARDDISK_538d5b73-1cd2-4d51-a71b-5a4f6cbc6cf8) 2025-03-26 16:49:57.579123 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_538d5b73-1cd2-4d51-a71b-5a4f6cbc6cf8) 2025-03-26 16:49:57.579154 | orchestrator | 2025-03-26 16:49:57.579602 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:57.580095 | orchestrator | Wednesday 26 March 2025 16:49:57 +0000 (0:00:00.761) 0:00:05.107 ******* 2025-03-26 16:49:58.079394 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_111b4eab-b6cc-4138-b8ce-a1d0ca0ede1d) 2025-03-26 16:49:58.079563 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_111b4eab-b6cc-4138-b8ce-a1d0ca0ede1d) 2025-03-26 16:49:58.079945 | orchestrator | 2025-03-26 16:49:58.080451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:58.081036 | orchestrator | Wednesday 26 March 2025 16:49:58 +0000 (0:00:00.502) 0:00:05.610 ******* 2025-03-26 16:49:58.584880 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a37eb2c8-f05e-43da-8c12-0830eb7c9d71) 2025-03-26 16:49:58.585933 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a37eb2c8-f05e-43da-8c12-0830eb7c9d71) 2025-03-26 16:49:58.586974 | orchestrator | 2025-03-26 16:49:58.587546 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:49:58.588643 | orchestrator | Wednesday 26 March 2025 16:49:58 +0000 (0:00:00.502) 0:00:06.113 ******* 2025-03-26 16:49:58.944013 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-26 16:49:58.944549 | orchestrator | 2025-03-26 16:49:58.944581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:49:58.945573 | orchestrator | Wednesday 26 March 2025 16:49:58 +0000 (0:00:00.361) 0:00:06.475 ******* 2025-03-26 16:49:59.467273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-03-26 16:49:59.467510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-03-26 16:49:59.467544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-03-26 16:49:59.468249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-03-26 16:49:59.470916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-03-26 16:49:59.471769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-03-26 16:49:59.472819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-03-26 16:49:59.473864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-03-26 16:49:59.473959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-03-26 16:49:59.474162 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-03-26 16:49:59.475203 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-03-26 16:49:59.475543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=sdd) 2025-03-26 16:49:59.476165 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-03-26 16:49:59.476992 | orchestrator | 2025-03-26 16:49:59.477513 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:49:59.478738 | orchestrator | Wednesday 26 March 2025 16:49:59 +0000 (0:00:00.521) 0:00:06.997 ******* 2025-03-26 16:49:59.674947 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:49:59.676349 | orchestrator | 2025-03-26 16:49:59.677292 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:49:59.678120 | orchestrator | Wednesday 26 March 2025 16:49:59 +0000 (0:00:00.207) 0:00:07.204 ******* 2025-03-26 16:49:59.898878 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:49:59.899241 | orchestrator | 2025-03-26 16:49:59.900102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:49:59.900779 | orchestrator | Wednesday 26 March 2025 16:49:59 +0000 (0:00:00.224) 0:00:07.428 ******* 2025-03-26 16:50:00.137643 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:00.145157 | orchestrator | 2025-03-26 16:50:00.145482 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:00.146009 | orchestrator | Wednesday 26 March 2025 16:50:00 +0000 (0:00:00.234) 0:00:07.663 ******* 2025-03-26 16:50:00.355618 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:00.356032 | orchestrator | 2025-03-26 16:50:00.356581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:00.356951 | orchestrator | Wednesday 26 March 2025 16:50:00 +0000 (0:00:00.222) 0:00:07.886 ******* 2025-03-26 16:50:01.047620 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:01.047823 | orchestrator | 2025-03-26 16:50:01.048522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:01.049089 | orchestrator | Wednesday 26 March 2025 16:50:01 +0000 (0:00:00.690) 0:00:08.577 ******* 2025-03-26 16:50:01.286209 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:01.286479 | orchestrator | 2025-03-26 16:50:01.286509 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:01.286951 | orchestrator | Wednesday 26 March 2025 16:50:01 +0000 (0:00:00.240) 0:00:08.817 ******* 2025-03-26 16:50:01.531579 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:01.531914 | orchestrator | 2025-03-26 16:50:01.532348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:01.532770 | orchestrator | Wednesday 26 March 2025 16:50:01 +0000 (0:00:00.245) 0:00:09.063 ******* 2025-03-26 16:50:01.746597 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:01.747034 | orchestrator | 2025-03-26 16:50:01.747069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:01.747499 | orchestrator | Wednesday 26 March 2025 16:50:01 +0000 (0:00:00.212) 0:00:09.276 ******* 2025-03-26 16:50:02.544766 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-03-26 16:50:02.544924 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-03-26 16:50:02.545487 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-03-26 16:50:02.546101 | 
orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-03-26 16:50:02.546132 | orchestrator | 2025-03-26 16:50:02.546302 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:02.546806 | orchestrator | Wednesday 26 March 2025 16:50:02 +0000 (0:00:00.799) 0:00:10.075 ******* 2025-03-26 16:50:02.786880 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:02.787812 | orchestrator | 2025-03-26 16:50:02.789107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:02.997602 | orchestrator | Wednesday 26 March 2025 16:50:02 +0000 (0:00:00.239) 0:00:10.315 ******* 2025-03-26 16:50:02.997726 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:02.997800 | orchestrator | 2025-03-26 16:50:02.998636 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:02.999510 | orchestrator | Wednesday 26 March 2025 16:50:02 +0000 (0:00:00.212) 0:00:10.528 ******* 2025-03-26 16:50:03.207577 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:03.208380 | orchestrator | 2025-03-26 16:50:03.208890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:03.209703 | orchestrator | Wednesday 26 March 2025 16:50:03 +0000 (0:00:00.209) 0:00:10.738 ******* 2025-03-26 16:50:03.438420 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:03.438628 | orchestrator | 2025-03-26 16:50:03.438665 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-26 16:50:03.438727 | orchestrator | Wednesday 26 March 2025 16:50:03 +0000 (0:00:00.231) 0:00:10.969 ******* 2025-03-26 16:50:03.590542 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:03.591046 | orchestrator | 2025-03-26 16:50:03.593904 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-26 16:50:03.594583 | orchestrator | Wednesday 26 March 2025 16:50:03 +0000 (0:00:00.149) 0:00:11.118 ******* 2025-03-26 16:50:04.050156 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5f4a1373-2d27-5995-abd5-5c6678505b20'}}) 2025-03-26 16:50:04.051417 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'dd5d4016-0a76-5f2c-8b69-aeca47fee476'}}) 2025-03-26 16:50:04.051493 | orchestrator | 2025-03-26 16:50:04.051558 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-26 16:50:04.052016 | orchestrator | Wednesday 26 March 2025 16:50:04 +0000 (0:00:00.462) 0:00:11.581 ******* 2025-03-26 16:50:06.416280 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'}) 2025-03-26 16:50:06.416557 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'}) 2025-03-26 16:50:06.417345 | orchestrator | 2025-03-26 16:50:06.417847 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-26 16:50:06.418998 | orchestrator | Wednesday 26 March 2025 16:50:06 +0000 (0:00:02.364) 0:00:13.946 ******* 2025-03-26 16:50:06.603113 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 
'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:06.603402 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:06.603828 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:06.604260 | orchestrator | 2025-03-26 16:50:06.604547 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-26 16:50:06.604915 | orchestrator | Wednesday 26 March 2025 16:50:06 +0000 (0:00:00.188) 0:00:14.134 ******* 2025-03-26 16:50:08.217874 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'}) 2025-03-26 16:50:08.218187 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'}) 2025-03-26 16:50:08.218673 | orchestrator | 2025-03-26 16:50:08.218708 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-26 16:50:08.219099 | orchestrator | Wednesday 26 March 2025 16:50:08 +0000 (0:00:01.609) 0:00:15.744 ******* 2025-03-26 16:50:08.402563 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:08.403644 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:08.403692 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:08.404343 | orchestrator | 2025-03-26 16:50:08.404951 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-26 16:50:08.406005 | orchestrator | Wednesday 26 March 2025 16:50:08 +0000 (0:00:00.186) 0:00:15.930 ******* 2025-03-26 16:50:08.560928 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:08.561541 | orchestrator | 2025-03-26 16:50:08.562056 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-26 16:50:08.564505 | orchestrator | Wednesday 26 March 2025 16:50:08 +0000 (0:00:00.159) 0:00:16.089 ******* 2025-03-26 16:50:08.773028 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:08.774067 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:08.774318 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:08.776936 | orchestrator | 2025-03-26 16:50:08.939756 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-26 16:50:08.939844 | orchestrator | Wednesday 26 March 2025 16:50:08 +0000 (0:00:00.213) 0:00:16.303 ******* 2025-03-26 16:50:08.939871 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:08.940765 | orchestrator | 2025-03-26 16:50:08.941240 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-26 16:50:08.941488 | orchestrator | Wednesday 26 March 2025 16:50:08 +0000 (0:00:00.168) 0:00:16.471 ******* 2025-03-26 16:50:09.126992 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:09.127879 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:09.128455 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:09.130489 | orchestrator | 2025-03-26 16:50:09.131965 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-26 16:50:09.133732 | orchestrator | Wednesday 26 March 2025 16:50:09 +0000 (0:00:00.185) 0:00:16.657 ******* 2025-03-26 16:50:09.465655 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:09.465796 | orchestrator | 2025-03-26 16:50:09.466429 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-26 16:50:09.467465 | orchestrator | Wednesday 26 March 2025 16:50:09 +0000 (0:00:00.339) 0:00:16.997 ******* 2025-03-26 16:50:09.665571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:09.666746 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:09.667185 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:09.667600 | orchestrator | 2025-03-26 16:50:09.669902 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-26 16:50:09.866202 | orchestrator | Wednesday 26 March 2025 16:50:09 +0000 (0:00:00.198) 0:00:17.195 ******* 2025-03-26 16:50:09.866246 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:50:09.866605 | orchestrator | 2025-03-26 16:50:09.867227 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-26 16:50:09.867462 | orchestrator | Wednesday 26 March 2025 16:50:09 +0000 (0:00:00.201) 0:00:17.396 ******* 2025-03-26 16:50:10.084472 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:10.085124 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:10.085529 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:10.086006 | orchestrator | 2025-03-26 16:50:10.086506 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-26 16:50:10.086786 | orchestrator | Wednesday 26 March 2025 16:50:10 +0000 (0:00:00.219) 0:00:17.616 ******* 2025-03-26 16:50:10.319705 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:10.320522 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:10.320581 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:10.321159 | orchestrator | 2025-03-26 16:50:10.321677 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-26 
16:50:10.323960 | orchestrator | Wednesday 26 March 2025 16:50:10 +0000 (0:00:00.233) 0:00:17.849 ******* 2025-03-26 16:50:10.531894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:10.532085 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:10.533471 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:10.533875 | orchestrator | 2025-03-26 16:50:10.534323 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-26 16:50:10.534846 | orchestrator | Wednesday 26 March 2025 16:50:10 +0000 (0:00:00.212) 0:00:18.061 ******* 2025-03-26 16:50:10.676599 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:10.678548 | orchestrator | 2025-03-26 16:50:10.680106 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-26 16:50:10.682535 | orchestrator | Wednesday 26 March 2025 16:50:10 +0000 (0:00:00.144) 0:00:18.206 ******* 2025-03-26 16:50:10.831198 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:10.833202 | orchestrator | 2025-03-26 16:50:10.834084 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-26 16:50:10.834114 | orchestrator | Wednesday 26 March 2025 16:50:10 +0000 (0:00:00.154) 0:00:18.361 ******* 2025-03-26 16:50:10.993883 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:10.995595 | orchestrator | 2025-03-26 16:50:10.997455 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-26 16:50:10.998677 | orchestrator | Wednesday 26 March 2025 16:50:10 +0000 (0:00:00.162) 0:00:18.523 ******* 2025-03-26 16:50:11.153432 | orchestrator | ok: [testbed-node-3] => { 2025-03-26 16:50:11.153878 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-26 16:50:11.155033 | orchestrator | } 2025-03-26 16:50:11.156162 | orchestrator | 2025-03-26 16:50:11.156695 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-26 16:50:11.157273 | orchestrator | Wednesday 26 March 2025 16:50:11 +0000 (0:00:00.160) 0:00:18.684 ******* 2025-03-26 16:50:11.321936 | orchestrator | ok: [testbed-node-3] => { 2025-03-26 16:50:11.322487 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-26 16:50:11.323501 | orchestrator | } 2025-03-26 16:50:11.324592 | orchestrator | 2025-03-26 16:50:11.324797 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-26 16:50:11.325921 | orchestrator | Wednesday 26 March 2025 16:50:11 +0000 (0:00:00.168) 0:00:18.853 ******* 2025-03-26 16:50:11.716663 | orchestrator | ok: [testbed-node-3] => { 2025-03-26 16:50:11.717173 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-26 16:50:11.717241 | orchestrator | } 2025-03-26 16:50:11.718392 | orchestrator | 2025-03-26 16:50:11.719328 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-26 16:50:11.720745 | orchestrator | Wednesday 26 March 2025 16:50:11 +0000 (0:00:00.393) 0:00:19.246 ******* 2025-03-26 16:50:12.459206 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:50:12.459883 | orchestrator | 2025-03-26 16:50:12.460889 | orchestrator | TASK [Gather WAL VGs with 
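
The "Count OSDs put on ceph_*_devices defined in lvm_volumes" and "Fail if number of OSDs exceeds num_osds ..." tasks are all skipped on this node because no DB/WAL devices are configured, so the printed _num_osds_wanted_per_*_vg dicts stay empty. A rough sketch of that kind of check, assuming lvm_volumes entries may carry a db_vg key and that the per-VG limit lives in a variable named num_osds (both names are assumptions for this illustration):

- name: Count OSDs wanted per DB VG (sketch)
  ansible.builtin.set_fact:
    _num_osds_wanted_per_db_vg: >-
      {{ _num_osds_wanted_per_db_vg | default({})
         | combine({item.db_vg: (_num_osds_wanted_per_db_vg | default({})).get(item.db_vg, 0) + 1}) }}
  loop: "{{ lvm_volumes | selectattr('db_vg', 'defined') | list }}"

- name: Fail if number of OSDs exceeds num_osds for a DB VG (sketch)
  ansible.builtin.assert:
    that: item.value | int <= num_osds | int
    fail_msg: "{{ item.key }} would carry {{ item.value }} OSDs, limit is {{ num_osds }}"
  loop: "{{ _num_osds_wanted_per_db_vg | default({}) | dict2items }}"
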
total and available size in bytes] ******************* 2025-03-26 16:50:12.461557 | orchestrator | Wednesday 26 March 2025 16:50:12 +0000 (0:00:00.743) 0:00:19.989 ******* 2025-03-26 16:50:13.022667 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:50:13.022803 | orchestrator | 2025-03-26 16:50:13.023237 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-26 16:50:13.023786 | orchestrator | Wednesday 26 March 2025 16:50:13 +0000 (0:00:00.562) 0:00:20.551 ******* 2025-03-26 16:50:13.573992 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:50:13.574218 | orchestrator | 2025-03-26 16:50:13.574954 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-26 16:50:13.575481 | orchestrator | Wednesday 26 March 2025 16:50:13 +0000 (0:00:00.552) 0:00:21.104 ******* 2025-03-26 16:50:13.736637 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:50:13.736969 | orchestrator | 2025-03-26 16:50:13.737380 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-26 16:50:13.737913 | orchestrator | Wednesday 26 March 2025 16:50:13 +0000 (0:00:00.163) 0:00:21.267 ******* 2025-03-26 16:50:13.858256 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:13.858692 | orchestrator | 2025-03-26 16:50:13.858966 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-26 16:50:13.859681 | orchestrator | Wednesday 26 March 2025 16:50:13 +0000 (0:00:00.119) 0:00:21.387 ******* 2025-03-26 16:50:13.966492 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:14.140862 | orchestrator | 2025-03-26 16:50:14.140947 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-26 16:50:14.140964 | orchestrator | Wednesday 26 March 2025 16:50:13 +0000 (0:00:00.108) 0:00:21.496 ******* 2025-03-26 16:50:14.140991 | orchestrator | ok: [testbed-node-3] => { 2025-03-26 16:50:14.141839 | orchestrator |  "vgs_report": { 2025-03-26 16:50:14.143333 | orchestrator |  "vg": [] 2025-03-26 16:50:14.144259 | orchestrator |  } 2025-03-26 16:50:14.144556 | orchestrator | } 2025-03-26 16:50:14.145699 | orchestrator | 2025-03-26 16:50:14.146415 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-26 16:50:14.147025 | orchestrator | Wednesday 26 March 2025 16:50:14 +0000 (0:00:00.176) 0:00:21.672 ******* 2025-03-26 16:50:14.295283 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:14.297689 | orchestrator | 2025-03-26 16:50:14.300805 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-26 16:50:14.301456 | orchestrator | Wednesday 26 March 2025 16:50:14 +0000 (0:00:00.152) 0:00:21.825 ******* 2025-03-26 16:50:14.467789 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:14.468270 | orchestrator | 2025-03-26 16:50:14.469280 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-26 16:50:14.469909 | orchestrator | Wednesday 26 March 2025 16:50:14 +0000 (0:00:00.172) 0:00:21.998 ******* 2025-03-26 16:50:14.636291 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:14.637852 | orchestrator | 2025-03-26 16:50:14.639022 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-26 16:50:14.639994 | orchestrator | Wednesday 26 March 2025 16:50:14 +0000 (0:00:00.165) 
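
The "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" and "Combine JSON ..." steps most plausibly rely on LVM's JSON reporting; the vgs_report printed above ({"vg": []}) has exactly the shape vgs emits. The real tasks evidently restrict the query to the configured DB/WAL VGs (hence the empty list here even though two block VGs exist); the hedged sketch below queries all VGs and uses only flags vgs actually supports.

- name: Gather VGs with total and available size in bytes (sketch)
  ansible.builtin.command: vgs --units b --nosuffix --reportformat json -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false

- name: Build vgs_report from the command output (sketch)
  ansible.builtin.set_fact:
    # the original play combines separate DB, WAL and DB+WAL queries; a single one is shown here
    vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0] }}"

- name: Print LVM VGs report data (sketch)
  ansible.builtin.debug:
    var: vgs_report
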
0:00:22.164 ******* 2025-03-26 16:50:15.033250 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:15.033456 | orchestrator | 2025-03-26 16:50:15.033922 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-26 16:50:15.034231 | orchestrator | Wednesday 26 March 2025 16:50:15 +0000 (0:00:00.400) 0:00:22.564 ******* 2025-03-26 16:50:15.190728 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:15.191449 | orchestrator | 2025-03-26 16:50:15.191489 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-26 16:50:15.191629 | orchestrator | Wednesday 26 March 2025 16:50:15 +0000 (0:00:00.156) 0:00:22.721 ******* 2025-03-26 16:50:15.355580 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:15.355694 | orchestrator | 2025-03-26 16:50:15.356047 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-26 16:50:15.356553 | orchestrator | Wednesday 26 March 2025 16:50:15 +0000 (0:00:00.164) 0:00:22.886 ******* 2025-03-26 16:50:15.532120 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:15.532431 | orchestrator | 2025-03-26 16:50:15.532471 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-26 16:50:15.533261 | orchestrator | Wednesday 26 March 2025 16:50:15 +0000 (0:00:00.176) 0:00:23.063 ******* 2025-03-26 16:50:15.686541 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:15.686925 | orchestrator | 2025-03-26 16:50:15.687274 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-26 16:50:15.687712 | orchestrator | Wednesday 26 March 2025 16:50:15 +0000 (0:00:00.150) 0:00:23.214 ******* 2025-03-26 16:50:15.833687 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:15.833915 | orchestrator | 2025-03-26 16:50:15.834135 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-26 16:50:15.834696 | orchestrator | Wednesday 26 March 2025 16:50:15 +0000 (0:00:00.150) 0:00:23.365 ******* 2025-03-26 16:50:15.981322 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:15.981546 | orchestrator | 2025-03-26 16:50:15.981574 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-26 16:50:16.117296 | orchestrator | Wednesday 26 March 2025 16:50:15 +0000 (0:00:00.146) 0:00:23.512 ******* 2025-03-26 16:50:16.117461 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:16.117871 | orchestrator | 2025-03-26 16:50:16.264708 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-26 16:50:16.264823 | orchestrator | Wednesday 26 March 2025 16:50:16 +0000 (0:00:00.136) 0:00:23.648 ******* 2025-03-26 16:50:16.264857 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:16.264940 | orchestrator | 2025-03-26 16:50:16.264967 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-26 16:50:16.265367 | orchestrator | Wednesday 26 March 2025 16:50:16 +0000 (0:00:00.147) 0:00:23.796 ******* 2025-03-26 16:50:16.436803 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:16.436960 | orchestrator | 2025-03-26 16:50:16.438014 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-26 16:50:16.438456 | orchestrator | Wednesday 26 March 2025 16:50:16 
+0000 (0:00:00.171) 0:00:23.967 ******* 2025-03-26 16:50:16.590398 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:16.590614 | orchestrator | 2025-03-26 16:50:16.591505 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-26 16:50:16.591541 | orchestrator | Wednesday 26 March 2025 16:50:16 +0000 (0:00:00.153) 0:00:24.121 ******* 2025-03-26 16:50:16.778496 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:16.778917 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:16.779807 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:16.780576 | orchestrator | 2025-03-26 16:50:16.781015 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-26 16:50:16.782009 | orchestrator | Wednesday 26 March 2025 16:50:16 +0000 (0:00:00.188) 0:00:24.310 ******* 2025-03-26 16:50:17.196460 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:17.197167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:17.197322 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:17.197437 | orchestrator | 2025-03-26 16:50:17.197757 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-26 16:50:17.198105 | orchestrator | Wednesday 26 March 2025 16:50:17 +0000 (0:00:00.417) 0:00:24.727 ******* 2025-03-26 16:50:17.374749 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:17.376651 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:17.377385 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:17.377421 | orchestrator | 2025-03-26 16:50:17.379134 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-26 16:50:17.379593 | orchestrator | Wednesday 26 March 2025 16:50:17 +0000 (0:00:00.177) 0:00:24.905 ******* 2025-03-26 16:50:17.563539 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:17.563691 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:17.564324 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:17.565437 | orchestrator | 2025-03-26 16:50:17.565726 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-26 16:50:17.566200 | orchestrator | Wednesday 26 March 2025 16:50:17 +0000 (0:00:00.189) 0:00:25.095 ******* 2025-03-26 16:50:17.737043 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 
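
The two "Fail if DB LV size < 30 GiB ..." guards above are skipped because no DB LVs are requested; their purpose, per the task names, is to enforce a minimum DB LV size of 30 GiB. A minimal sketch of such a guard, with the size variable name (_ceph_db_lv_size_bytes) invented for this illustration:

- name: Fail if DB LV size < 30 GiB (sketch)
  ansible.builtin.assert:
    that: _ceph_db_lv_size_bytes | int >= 30 * 1024 * 1024 * 1024  # 30 GiB in bytes
    fail_msg: "DB LVs must be at least 30 GiB"
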
'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:17.737248 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:17.737931 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:17.738696 | orchestrator | 2025-03-26 16:50:17.739488 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-26 16:50:17.740108 | orchestrator | Wednesday 26 March 2025 16:50:17 +0000 (0:00:00.170) 0:00:25.265 ******* 2025-03-26 16:50:17.924033 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:17.924320 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:17.925034 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:17.925614 | orchestrator | 2025-03-26 16:50:17.926103 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-26 16:50:17.926690 | orchestrator | Wednesday 26 March 2025 16:50:17 +0000 (0:00:00.188) 0:00:25.454 ******* 2025-03-26 16:50:18.105938 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:18.107042 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:18.107572 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:18.107979 | orchestrator | 2025-03-26 16:50:18.108563 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-26 16:50:18.109018 | orchestrator | Wednesday 26 March 2025 16:50:18 +0000 (0:00:00.183) 0:00:25.637 ******* 2025-03-26 16:50:18.291647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:18.293050 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:18.293675 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:18.294285 | orchestrator | 2025-03-26 16:50:18.295186 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-26 16:50:18.295503 | orchestrator | Wednesday 26 March 2025 16:50:18 +0000 (0:00:00.185) 0:00:25.823 ******* 2025-03-26 16:50:18.841997 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:50:18.842187 | orchestrator | 2025-03-26 16:50:18.842926 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-03-26 16:50:18.843583 | orchestrator | Wednesday 26 March 2025 16:50:18 +0000 (0:00:00.548) 0:00:26.371 ******* 2025-03-26 16:50:19.424308 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:50:19.424748 | orchestrator | 2025-03-26 16:50:19.425547 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-26 16:50:19.426419 | orchestrator | Wednesday 26 March 2025 16:50:19 +0000 (0:00:00.582) 0:00:26.955 
******* 2025-03-26 16:50:19.577532 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:50:19.577883 | orchestrator | 2025-03-26 16:50:19.577918 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-26 16:50:19.578462 | orchestrator | Wednesday 26 March 2025 16:50:19 +0000 (0:00:00.150) 0:00:27.105 ******* 2025-03-26 16:50:20.081484 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'vg_name': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'}) 2025-03-26 16:50:20.081659 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'vg_name': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'}) 2025-03-26 16:50:20.082254 | orchestrator | 2025-03-26 16:50:20.082390 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-26 16:50:20.082485 | orchestrator | Wednesday 26 March 2025 16:50:20 +0000 (0:00:00.507) 0:00:27.613 ******* 2025-03-26 16:50:20.270574 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:20.270894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:20.272519 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:20.272902 | orchestrator | 2025-03-26 16:50:20.274199 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-26 16:50:20.274273 | orchestrator | Wednesday 26 March 2025 16:50:20 +0000 (0:00:00.189) 0:00:27.802 ******* 2025-03-26 16:50:20.457388 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:20.458203 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:20.458650 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:20.458975 | orchestrator | 2025-03-26 16:50:20.459452 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-26 16:50:20.460178 | orchestrator | Wednesday 26 March 2025 16:50:20 +0000 (0:00:00.184) 0:00:27.987 ******* 2025-03-26 16:50:20.636616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20', 'data_vg': 'ceph-5f4a1373-2d27-5995-abd5-5c6678505b20'})  2025-03-26 16:50:20.637475 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476', 'data_vg': 'ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476'})  2025-03-26 16:50:20.639465 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:50:20.639550 | orchestrator | 2025-03-26 16:50:20.640452 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-26 16:50:20.641037 | orchestrator | Wednesday 26 March 2025 16:50:20 +0000 (0:00:00.179) 0:00:28.167 ******* 2025-03-26 16:50:21.724972 | orchestrator | ok: [testbed-node-3] => { 2025-03-26 16:50:21.725232 | orchestrator |  "lvm_report": { 2025-03-26 16:50:21.725265 | orchestrator |  "lv": [ 2025-03-26 16:50:21.726103 | orchestrator |  { 2025-03-26 16:50:21.727368 | 
orchestrator |  "lv_name": "osd-block-5f4a1373-2d27-5995-abd5-5c6678505b20", 2025-03-26 16:50:21.727472 | orchestrator |  "vg_name": "ceph-5f4a1373-2d27-5995-abd5-5c6678505b20" 2025-03-26 16:50:21.727984 | orchestrator |  }, 2025-03-26 16:50:21.729674 | orchestrator |  { 2025-03-26 16:50:21.730477 | orchestrator |  "lv_name": "osd-block-dd5d4016-0a76-5f2c-8b69-aeca47fee476", 2025-03-26 16:50:21.730550 | orchestrator |  "vg_name": "ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476" 2025-03-26 16:50:21.731260 | orchestrator |  } 2025-03-26 16:50:21.732114 | orchestrator |  ], 2025-03-26 16:50:21.732523 | orchestrator |  "pv": [ 2025-03-26 16:50:21.733297 | orchestrator |  { 2025-03-26 16:50:21.734871 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-26 16:50:21.735048 | orchestrator |  "vg_name": "ceph-5f4a1373-2d27-5995-abd5-5c6678505b20" 2025-03-26 16:50:21.735166 | orchestrator |  }, 2025-03-26 16:50:21.735606 | orchestrator |  { 2025-03-26 16:50:21.736450 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-26 16:50:21.736969 | orchestrator |  "vg_name": "ceph-dd5d4016-0a76-5f2c-8b69-aeca47fee476" 2025-03-26 16:50:21.737568 | orchestrator |  } 2025-03-26 16:50:21.738247 | orchestrator |  ] 2025-03-26 16:50:21.738464 | orchestrator |  } 2025-03-26 16:50:21.738712 | orchestrator | } 2025-03-26 16:50:21.739015 | orchestrator | 2025-03-26 16:50:21.739382 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-26 16:50:21.739482 | orchestrator | 2025-03-26 16:50:21.739932 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-26 16:50:21.740760 | orchestrator | Wednesday 26 March 2025 16:50:21 +0000 (0:00:01.088) 0:00:29.255 ******* 2025-03-26 16:50:22.075251 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-03-26 16:50:22.075457 | orchestrator | 2025-03-26 16:50:22.075656 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-26 16:50:22.076852 | orchestrator | Wednesday 26 March 2025 16:50:22 +0000 (0:00:00.351) 0:00:29.606 ******* 2025-03-26 16:50:22.359964 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:50:22.360199 | orchestrator | 2025-03-26 16:50:22.360759 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:22.361112 | orchestrator | Wednesday 26 March 2025 16:50:22 +0000 (0:00:00.284) 0:00:29.891 ******* 2025-03-26 16:50:22.868384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-03-26 16:50:22.869703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-03-26 16:50:22.870966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-03-26 16:50:22.872657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-03-26 16:50:22.873032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-03-26 16:50:22.873968 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-03-26 16:50:22.875451 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-03-26 16:50:22.875928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-03-26 16:50:22.876862 | orchestrator | included: 
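
The lvm_report printed above (an lv list and a pv list, each pairing a name with its VG) is exactly the shape lvs/pvs produce with JSON reporting. Below is a hedged sketch of the "Get list of Ceph LVs/PVs ...", "Combine JSON ...", "Create list of VG/LV names" and "Fail if ... missing" steps, using only flags and filters known to exist; variable names follow the ones visible in the log, but this is an illustration, not the original play.

- name: Get list of Ceph LVs with associated VGs (sketch)
  ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs (sketch)
  ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
  register: _pvs_cmd_output
  changed_when: false

- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output (sketch)
  ansible.builtin.set_fact:
    # a real play would filter to ceph-* VGs; every LV/PV on the node is included here
    lvm_report:
      lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
      pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

- name: Create list of VG/LV names (sketch)
  ansible.builtin.set_fact:
    _vg_lv_names: "{{ _vg_lv_names | default([]) + [item.vg_name ~ '/' ~ item.lv_name] }}"
  loop: "{{ lvm_report.lv }}"

- name: Fail if block LV defined in lvm_volumes is missing (sketch)
  ansible.builtin.assert:
    that: (item.data_vg ~ '/' ~ item.data) in _vg_lv_names
    fail_msg: "LV {{ item.data }} in VG {{ item.data_vg }} was not found"
  loop: "{{ lvm_volumes }}"
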
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-03-26 16:50:22.877283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-03-26 16:50:22.877767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-03-26 16:50:22.878085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-03-26 16:50:22.878490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-03-26 16:50:22.879321 | orchestrator | 2025-03-26 16:50:22.879676 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:22.880258 | orchestrator | Wednesday 26 March 2025 16:50:22 +0000 (0:00:00.506) 0:00:30.398 ******* 2025-03-26 16:50:23.101969 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:23.102323 | orchestrator | 2025-03-26 16:50:23.102378 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:23.102737 | orchestrator | Wednesday 26 March 2025 16:50:23 +0000 (0:00:00.234) 0:00:30.633 ******* 2025-03-26 16:50:23.328619 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:23.329311 | orchestrator | 2025-03-26 16:50:23.329882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:23.330555 | orchestrator | Wednesday 26 March 2025 16:50:23 +0000 (0:00:00.227) 0:00:30.860 ******* 2025-03-26 16:50:23.540123 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:23.540275 | orchestrator | 2025-03-26 16:50:23.540743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:23.541390 | orchestrator | Wednesday 26 March 2025 16:50:23 +0000 (0:00:00.211) 0:00:31.071 ******* 2025-03-26 16:50:23.755803 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:23.755976 | orchestrator | 2025-03-26 16:50:23.756007 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:23.756502 | orchestrator | Wednesday 26 March 2025 16:50:23 +0000 (0:00:00.215) 0:00:31.287 ******* 2025-03-26 16:50:23.978946 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:23.979845 | orchestrator | 2025-03-26 16:50:23.979886 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:23.980502 | orchestrator | Wednesday 26 March 2025 16:50:23 +0000 (0:00:00.221) 0:00:31.508 ******* 2025-03-26 16:50:24.209581 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:24.209744 | orchestrator | 2025-03-26 16:50:24.209956 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:24.210809 | orchestrator | Wednesday 26 March 2025 16:50:24 +0000 (0:00:00.229) 0:00:31.738 ******* 2025-03-26 16:50:24.859272 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:24.859479 | orchestrator | 2025-03-26 16:50:24.859922 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:24.860732 | orchestrator | Wednesday 26 March 2025 16:50:24 +0000 (0:00:00.650) 0:00:32.388 ******* 2025-03-26 16:50:25.092898 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:25.093481 | orchestrator | 2025-03-26 16:50:25.094120 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2025-03-26 16:50:25.094799 | orchestrator | Wednesday 26 March 2025 16:50:25 +0000 (0:00:00.234) 0:00:32.623 ******* 2025-03-26 16:50:25.634212 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_16996a57-a85a-47ec-96d8-6a9835d4cef6) 2025-03-26 16:50:25.634593 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_16996a57-a85a-47ec-96d8-6a9835d4cef6) 2025-03-26 16:50:25.635132 | orchestrator | 2025-03-26 16:50:25.635517 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:25.636443 | orchestrator | Wednesday 26 March 2025 16:50:25 +0000 (0:00:00.541) 0:00:33.165 ******* 2025-03-26 16:50:26.152293 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_461b6804-a72b-4d73-873c-350a148be214) 2025-03-26 16:50:26.153153 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_461b6804-a72b-4d73-873c-350a148be214) 2025-03-26 16:50:26.153662 | orchestrator | 2025-03-26 16:50:26.154673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:26.154744 | orchestrator | Wednesday 26 March 2025 16:50:26 +0000 (0:00:00.516) 0:00:33.681 ******* 2025-03-26 16:50:26.744543 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1f0ffd5a-0515-4e82-9e1a-64c4889ae37d) 2025-03-26 16:50:26.744683 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1f0ffd5a-0515-4e82-9e1a-64c4889ae37d) 2025-03-26 16:50:26.745196 | orchestrator | 2025-03-26 16:50:26.745522 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:26.746758 | orchestrator | Wednesday 26 March 2025 16:50:26 +0000 (0:00:00.593) 0:00:34.275 ******* 2025-03-26 16:50:27.325563 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3084e68a-ebad-4437-bd94-82607113cb35) 2025-03-26 16:50:27.325882 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3084e68a-ebad-4437-bd94-82607113cb35) 2025-03-26 16:50:27.326184 | orchestrator | 2025-03-26 16:50:27.326789 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:27.327118 | orchestrator | Wednesday 26 March 2025 16:50:27 +0000 (0:00:00.580) 0:00:34.855 ******* 2025-03-26 16:50:27.676170 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-26 16:50:27.677269 | orchestrator | 2025-03-26 16:50:27.677363 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:27.678313 | orchestrator | Wednesday 26 March 2025 16:50:27 +0000 (0:00:00.351) 0:00:35.207 ******* 2025-03-26 16:50:28.235820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-03-26 16:50:28.236303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-03-26 16:50:28.237369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-03-26 16:50:28.237834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-03-26 16:50:28.239069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-03-26 16:50:28.239922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
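
The repeated "Add known links to the list of available block devices" tasks match each base device against its /dev/disk/by-id aliases; for testbed-node-4 this turns up the scsi-0QEMU_*/scsi-SQEMU_* pairs and the ata-QEMU_DVD-ROM link shown above. A minimal sketch of one way to resolve such links (the variable names here are assumptions, not the original role's):

- name: Find by-id links that point at a device (sketch)
  ansible.builtin.command: find /dev/disk/by-id -lname "*/{{ device }}"
  register: _device_links
  changed_when: false
  vars:
    device: sdb  # hypothetical example; the play iterates over every base device

- name: Add known links to the list of available block devices (sketch)
  ansible.builtin.set_fact:
    available_block_devices: "{{ (available_block_devices | default([])) + _device_links.stdout_lines }}"
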
testbed-node-4 => (item=loop5) 2025-03-26 16:50:28.240346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-03-26 16:50:28.240858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-03-26 16:50:28.241410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-03-26 16:50:28.242414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-03-26 16:50:28.242773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-03-26 16:50:28.242982 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-03-26 16:50:28.243475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-03-26 16:50:28.243902 | orchestrator | 2025-03-26 16:50:28.244478 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:28.244717 | orchestrator | Wednesday 26 March 2025 16:50:28 +0000 (0:00:00.557) 0:00:35.765 ******* 2025-03-26 16:50:28.879523 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:28.880368 | orchestrator | 2025-03-26 16:50:28.880913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:28.881740 | orchestrator | Wednesday 26 March 2025 16:50:28 +0000 (0:00:00.641) 0:00:36.407 ******* 2025-03-26 16:50:29.114161 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:29.114525 | orchestrator | 2025-03-26 16:50:29.114584 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:29.115152 | orchestrator | Wednesday 26 March 2025 16:50:29 +0000 (0:00:00.236) 0:00:36.643 ******* 2025-03-26 16:50:29.346636 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:29.347094 | orchestrator | 2025-03-26 16:50:29.347128 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:29.347396 | orchestrator | Wednesday 26 March 2025 16:50:29 +0000 (0:00:00.234) 0:00:36.877 ******* 2025-03-26 16:50:29.571779 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:29.573367 | orchestrator | 2025-03-26 16:50:29.573407 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:29.573431 | orchestrator | Wednesday 26 March 2025 16:50:29 +0000 (0:00:00.223) 0:00:37.101 ******* 2025-03-26 16:50:29.790396 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:29.793033 | orchestrator | 2025-03-26 16:50:29.796113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:29.797264 | orchestrator | Wednesday 26 March 2025 16:50:29 +0000 (0:00:00.218) 0:00:37.319 ******* 2025-03-26 16:50:30.057779 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:30.058263 | orchestrator | 2025-03-26 16:50:30.058297 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:30.062093 | orchestrator | Wednesday 26 March 2025 16:50:30 +0000 (0:00:00.267) 0:00:37.586 ******* 2025-03-26 16:50:30.294061 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:30.294204 | orchestrator | 2025-03-26 16:50:30.295100 | orchestrator | TASK [Add known partitions to 
the list of available block devices] ************* 2025-03-26 16:50:30.295679 | orchestrator | Wednesday 26 March 2025 16:50:30 +0000 (0:00:00.236) 0:00:37.823 ******* 2025-03-26 16:50:30.491259 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:30.491399 | orchestrator | 2025-03-26 16:50:30.492281 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:30.492766 | orchestrator | Wednesday 26 March 2025 16:50:30 +0000 (0:00:00.198) 0:00:38.021 ******* 2025-03-26 16:50:31.485408 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-03-26 16:50:31.485612 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-03-26 16:50:31.485647 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-03-26 16:50:31.486955 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-03-26 16:50:31.487053 | orchestrator | 2025-03-26 16:50:31.487931 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:31.488559 | orchestrator | Wednesday 26 March 2025 16:50:31 +0000 (0:00:00.993) 0:00:39.014 ******* 2025-03-26 16:50:31.692817 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:31.693896 | orchestrator | 2025-03-26 16:50:31.693929 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:31.694646 | orchestrator | Wednesday 26 March 2025 16:50:31 +0000 (0:00:00.208) 0:00:39.223 ******* 2025-03-26 16:50:31.899223 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:31.900303 | orchestrator | 2025-03-26 16:50:31.900905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:31.902742 | orchestrator | Wednesday 26 March 2025 16:50:31 +0000 (0:00:00.206) 0:00:39.430 ******* 2025-03-26 16:50:32.411546 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:32.412143 | orchestrator | 2025-03-26 16:50:32.412211 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:32.412851 | orchestrator | Wednesday 26 March 2025 16:50:32 +0000 (0:00:00.513) 0:00:39.943 ******* 2025-03-26 16:50:32.640696 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:32.641510 | orchestrator | 2025-03-26 16:50:32.642103 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-26 16:50:32.643975 | orchestrator | Wednesday 26 March 2025 16:50:32 +0000 (0:00:00.228) 0:00:40.171 ******* 2025-03-26 16:50:32.775186 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:32.775452 | orchestrator | 2025-03-26 16:50:32.775638 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-26 16:50:32.776160 | orchestrator | Wednesday 26 March 2025 16:50:32 +0000 (0:00:00.135) 0:00:40.306 ******* 2025-03-26 16:50:33.006727 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3792eb61-f724-5898-a357-e4730d4e9a9e'}}) 2025-03-26 16:50:33.007009 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5825628-1db3-5928-b5a4-d100b751e871'}}) 2025-03-26 16:50:33.007045 | orchestrator | 2025-03-26 16:50:33.009563 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-26 16:50:33.011799 | orchestrator | Wednesday 26 March 2025 16:50:33 +0000 (0:00:00.229) 0:00:40.536 ******* 2025-03-26 16:50:35.226828 | 
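
The "Create dict of block VGs -> PVs from ceph_osd_devices" step above shows the input shape directly: ceph_osd_devices maps a device name (sdb, sdc) to an osd_lvm_uuid, and the VG created from it is named ceph-<uuid>. A small sketch of that transformation, again as an illustration rather than the original task:

- name: Create dict of block VGs -> PVs from ceph_osd_devices (sketch)
  ansible.builtin.set_fact:
    _block_vgs_to_pvs: >-
      {{ _block_vgs_to_pvs | default({})
         | combine({'ceph-' ~ item.value.osd_lvm_uuid: '/dev/' ~ item.key}) }}
  loop: "{{ ceph_osd_devices | dict2items }}"

For testbed-node-4 this would map ceph-3792eb61-... to /dev/sdb and ceph-e5825628-... to /dev/sdc, matching the VG names the following "Create block VGs" task reports as changed.
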
orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'}) 2025-03-26 16:50:35.227243 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'}) 2025-03-26 16:50:35.227570 | orchestrator | 2025-03-26 16:50:35.229610 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-26 16:50:35.413358 | orchestrator | Wednesday 26 March 2025 16:50:35 +0000 (0:00:02.218) 0:00:42.754 ******* 2025-03-26 16:50:35.413475 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:35.414070 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:35.414539 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:35.414567 | orchestrator | 2025-03-26 16:50:35.415187 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-26 16:50:35.415529 | orchestrator | Wednesday 26 March 2025 16:50:35 +0000 (0:00:00.189) 0:00:42.944 ******* 2025-03-26 16:50:36.784110 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'}) 2025-03-26 16:50:36.784291 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'}) 2025-03-26 16:50:36.784844 | orchestrator | 2025-03-26 16:50:36.785006 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-26 16:50:36.785488 | orchestrator | Wednesday 26 March 2025 16:50:36 +0000 (0:00:01.369) 0:00:44.313 ******* 2025-03-26 16:50:36.964071 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:36.964205 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:36.964938 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:36.965226 | orchestrator | 2025-03-26 16:50:36.965743 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-03-26 16:50:36.965885 | orchestrator | Wednesday 26 March 2025 16:50:36 +0000 (0:00:00.181) 0:00:44.495 ******* 2025-03-26 16:50:37.365639 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:37.366705 | orchestrator | 2025-03-26 16:50:37.551387 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-26 16:50:37.551473 | orchestrator | Wednesday 26 March 2025 16:50:37 +0000 (0:00:00.398) 0:00:44.894 ******* 2025-03-26 16:50:37.551502 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:37.551806 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 
'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:37.552610 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:37.553402 | orchestrator | 2025-03-26 16:50:37.553889 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-26 16:50:37.554949 | orchestrator | Wednesday 26 March 2025 16:50:37 +0000 (0:00:00.187) 0:00:45.081 ******* 2025-03-26 16:50:37.710173 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:37.710297 | orchestrator | 2025-03-26 16:50:37.711714 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-26 16:50:37.712561 | orchestrator | Wednesday 26 March 2025 16:50:37 +0000 (0:00:00.159) 0:00:45.241 ******* 2025-03-26 16:50:37.955792 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:37.955930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:37.956924 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:37.957812 | orchestrator | 2025-03-26 16:50:37.958210 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-26 16:50:37.959023 | orchestrator | Wednesday 26 March 2025 16:50:37 +0000 (0:00:00.245) 0:00:45.487 ******* 2025-03-26 16:50:38.104932 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:38.105098 | orchestrator | 2025-03-26 16:50:38.106181 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-26 16:50:38.107860 | orchestrator | Wednesday 26 March 2025 16:50:38 +0000 (0:00:00.149) 0:00:45.636 ******* 2025-03-26 16:50:38.285673 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:38.286172 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:38.286871 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:38.287603 | orchestrator | 2025-03-26 16:50:38.288110 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-26 16:50:38.288491 | orchestrator | Wednesday 26 March 2025 16:50:38 +0000 (0:00:00.180) 0:00:45.817 ******* 2025-03-26 16:50:38.444558 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:50:38.444841 | orchestrator | 2025-03-26 16:50:38.445806 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-26 16:50:38.446536 | orchestrator | Wednesday 26 March 2025 16:50:38 +0000 (0:00:00.157) 0:00:45.974 ******* 2025-03-26 16:50:38.630205 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:38.630765 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:38.631265 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:38.632233 | orchestrator | 2025-03-26 16:50:38.633084 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2025-03-26 16:50:38.634105 | orchestrator | Wednesday 26 March 2025 16:50:38 +0000 (0:00:00.185) 0:00:46.160 ******* 2025-03-26 16:50:38.806540 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:38.807118 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:38.808126 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:38.808591 | orchestrator | 2025-03-26 16:50:38.809513 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-26 16:50:38.809976 | orchestrator | Wednesday 26 March 2025 16:50:38 +0000 (0:00:00.177) 0:00:46.338 ******* 2025-03-26 16:50:38.984995 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:38.985194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:38.985503 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:38.986139 | orchestrator | 2025-03-26 16:50:38.986662 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-26 16:50:38.987393 | orchestrator | Wednesday 26 March 2025 16:50:38 +0000 (0:00:00.178) 0:00:46.516 ******* 2025-03-26 16:50:39.138667 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:39.139396 | orchestrator | 2025-03-26 16:50:39.140985 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-26 16:50:39.143765 | orchestrator | Wednesday 26 March 2025 16:50:39 +0000 (0:00:00.152) 0:00:46.669 ******* 2025-03-26 16:50:39.287501 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:39.288495 | orchestrator | 2025-03-26 16:50:39.288926 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-26 16:50:39.289598 | orchestrator | Wednesday 26 March 2025 16:50:39 +0000 (0:00:00.149) 0:00:46.818 ******* 2025-03-26 16:50:39.669509 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:39.670130 | orchestrator | 2025-03-26 16:50:39.671771 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-26 16:50:39.672473 | orchestrator | Wednesday 26 March 2025 16:50:39 +0000 (0:00:00.380) 0:00:47.199 ******* 2025-03-26 16:50:39.844558 | orchestrator | ok: [testbed-node-4] => { 2025-03-26 16:50:39.845730 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-26 16:50:39.845765 | orchestrator | } 2025-03-26 16:50:39.846291 | orchestrator | 2025-03-26 16:50:39.847909 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-26 16:50:39.848606 | orchestrator | Wednesday 26 March 2025 16:50:39 +0000 (0:00:00.175) 0:00:47.374 ******* 2025-03-26 16:50:39.997728 | orchestrator | ok: [testbed-node-4] => { 2025-03-26 16:50:39.997845 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-26 16:50:39.999333 | orchestrator | } 2025-03-26 16:50:40.000341 | orchestrator | 2025-03-26 16:50:40.001075 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL 
VG] ******************************* 2025-03-26 16:50:40.001734 | orchestrator | Wednesday 26 March 2025 16:50:39 +0000 (0:00:00.154) 0:00:47.528 ******* 2025-03-26 16:50:40.154284 | orchestrator | ok: [testbed-node-4] => { 2025-03-26 16:50:40.154461 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-26 16:50:40.155470 | orchestrator | } 2025-03-26 16:50:40.156078 | orchestrator | 2025-03-26 16:50:40.156655 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-26 16:50:40.158461 | orchestrator | Wednesday 26 March 2025 16:50:40 +0000 (0:00:00.156) 0:00:47.685 ******* 2025-03-26 16:50:40.730178 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:50:40.730547 | orchestrator | 2025-03-26 16:50:40.731109 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-26 16:50:40.732183 | orchestrator | Wednesday 26 March 2025 16:50:40 +0000 (0:00:00.573) 0:00:48.259 ******* 2025-03-26 16:50:41.319131 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:50:41.321021 | orchestrator | 2025-03-26 16:50:41.324079 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-26 16:50:41.899045 | orchestrator | Wednesday 26 March 2025 16:50:41 +0000 (0:00:00.583) 0:00:48.843 ******* 2025-03-26 16:50:41.899179 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:50:41.899575 | orchestrator | 2025-03-26 16:50:41.900434 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-26 16:50:41.900496 | orchestrator | Wednesday 26 March 2025 16:50:41 +0000 (0:00:00.585) 0:00:49.428 ******* 2025-03-26 16:50:42.051271 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:50:42.052309 | orchestrator | 2025-03-26 16:50:42.052716 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-26 16:50:42.053435 | orchestrator | Wednesday 26 March 2025 16:50:42 +0000 (0:00:00.153) 0:00:49.581 ******* 2025-03-26 16:50:42.193247 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:42.194083 | orchestrator | 2025-03-26 16:50:42.194405 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-26 16:50:42.194439 | orchestrator | Wednesday 26 March 2025 16:50:42 +0000 (0:00:00.142) 0:00:49.724 ******* 2025-03-26 16:50:42.337264 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:42.337652 | orchestrator | 2025-03-26 16:50:42.338135 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-26 16:50:42.338471 | orchestrator | Wednesday 26 March 2025 16:50:42 +0000 (0:00:00.145) 0:00:49.869 ******* 2025-03-26 16:50:42.810905 | orchestrator | ok: [testbed-node-4] => { 2025-03-26 16:50:42.811700 | orchestrator |  "vgs_report": { 2025-03-26 16:50:42.811770 | orchestrator |  "vg": [] 2025-03-26 16:50:42.812601 | orchestrator |  } 2025-03-26 16:50:42.812930 | orchestrator | } 2025-03-26 16:50:42.813894 | orchestrator | 2025-03-26 16:50:42.814692 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-26 16:50:42.814726 | orchestrator | Wednesday 26 March 2025 16:50:42 +0000 (0:00:00.471) 0:00:50.340 ******* 2025-03-26 16:50:42.964977 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:42.965574 | orchestrator | 2025-03-26 16:50:42.965608 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2025-03-26 16:50:42.965773 | orchestrator | Wednesday 26 March 2025 16:50:42 +0000 (0:00:00.155) 0:00:50.496 ******* 2025-03-26 16:50:43.113583 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:43.114139 | orchestrator | 2025-03-26 16:50:43.114173 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-26 16:50:43.114613 | orchestrator | Wednesday 26 March 2025 16:50:43 +0000 (0:00:00.148) 0:00:50.645 ******* 2025-03-26 16:50:43.254246 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:43.255372 | orchestrator | 2025-03-26 16:50:43.255713 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-26 16:50:43.256534 | orchestrator | Wednesday 26 March 2025 16:50:43 +0000 (0:00:00.139) 0:00:50.785 ******* 2025-03-26 16:50:43.410772 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:43.411575 | orchestrator | 2025-03-26 16:50:43.411610 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-26 16:50:43.411686 | orchestrator | Wednesday 26 March 2025 16:50:43 +0000 (0:00:00.156) 0:00:50.941 ******* 2025-03-26 16:50:43.562094 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:43.562201 | orchestrator | 2025-03-26 16:50:43.562639 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-26 16:50:43.563308 | orchestrator | Wednesday 26 March 2025 16:50:43 +0000 (0:00:00.152) 0:00:51.093 ******* 2025-03-26 16:50:43.697546 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:43.697666 | orchestrator | 2025-03-26 16:50:43.697747 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-26 16:50:43.698580 | orchestrator | Wednesday 26 March 2025 16:50:43 +0000 (0:00:00.134) 0:00:51.228 ******* 2025-03-26 16:50:43.832236 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:43.833000 | orchestrator | 2025-03-26 16:50:43.833033 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-26 16:50:43.990002 | orchestrator | Wednesday 26 March 2025 16:50:43 +0000 (0:00:00.135) 0:00:51.363 ******* 2025-03-26 16:50:43.990111 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:43.991181 | orchestrator | 2025-03-26 16:50:43.991213 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-26 16:50:43.991567 | orchestrator | Wednesday 26 March 2025 16:50:43 +0000 (0:00:00.157) 0:00:51.520 ******* 2025-03-26 16:50:44.139247 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:44.140499 | orchestrator | 2025-03-26 16:50:44.140540 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-26 16:50:44.305633 | orchestrator | Wednesday 26 March 2025 16:50:44 +0000 (0:00:00.149) 0:00:51.670 ******* 2025-03-26 16:50:44.305738 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:44.306086 | orchestrator | 2025-03-26 16:50:44.306343 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-26 16:50:44.307071 | orchestrator | Wednesday 26 March 2025 16:50:44 +0000 (0:00:00.164) 0:00:51.835 ******* 2025-03-26 16:50:44.482604 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:44.483142 | orchestrator | 2025-03-26 16:50:44.484099 | orchestrator | TASK [Fail if size of DB+WAL LVs on 
ceph_db_wal_devices > available] *********** 2025-03-26 16:50:44.484324 | orchestrator | Wednesday 26 March 2025 16:50:44 +0000 (0:00:00.176) 0:00:52.011 ******* 2025-03-26 16:50:44.879891 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:44.880073 | orchestrator | 2025-03-26 16:50:44.880765 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-26 16:50:44.881192 | orchestrator | Wednesday 26 March 2025 16:50:44 +0000 (0:00:00.398) 0:00:52.410 ******* 2025-03-26 16:50:45.067801 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:45.068117 | orchestrator | 2025-03-26 16:50:45.068578 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-26 16:50:45.069426 | orchestrator | Wednesday 26 March 2025 16:50:45 +0000 (0:00:00.188) 0:00:52.599 ******* 2025-03-26 16:50:45.230304 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:45.230629 | orchestrator | 2025-03-26 16:50:45.230664 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-26 16:50:45.231090 | orchestrator | Wednesday 26 March 2025 16:50:45 +0000 (0:00:00.161) 0:00:52.760 ******* 2025-03-26 16:50:45.439058 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:45.439745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:45.440162 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:45.440761 | orchestrator | 2025-03-26 16:50:45.441504 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-26 16:50:45.441988 | orchestrator | Wednesday 26 March 2025 16:50:45 +0000 (0:00:00.208) 0:00:52.969 ******* 2025-03-26 16:50:45.618812 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:45.619018 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:45.619935 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:45.620400 | orchestrator | 2025-03-26 16:50:45.621021 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-26 16:50:45.621352 | orchestrator | Wednesday 26 March 2025 16:50:45 +0000 (0:00:00.181) 0:00:53.150 ******* 2025-03-26 16:50:45.795430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:45.796379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:45.796761 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:45.797453 | orchestrator | 2025-03-26 16:50:45.797818 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-26 16:50:45.798492 | orchestrator | Wednesday 26 March 2025 16:50:45 +0000 (0:00:00.175) 0:00:53.326 ******* 2025-03-26 16:50:45.973859 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:45.974746 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:45.975420 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:45.976170 | orchestrator | 2025-03-26 16:50:45.977128 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-26 16:50:45.977508 | orchestrator | Wednesday 26 March 2025 16:50:45 +0000 (0:00:00.178) 0:00:53.504 ******* 2025-03-26 16:50:46.183313 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:46.183732 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:46.184452 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:46.185116 | orchestrator | 2025-03-26 16:50:46.186176 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-26 16:50:46.186558 | orchestrator | Wednesday 26 March 2025 16:50:46 +0000 (0:00:00.209) 0:00:53.714 ******* 2025-03-26 16:50:46.392430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:46.392612 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:46.392672 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:46.393147 | orchestrator | 2025-03-26 16:50:46.393604 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-26 16:50:46.393635 | orchestrator | Wednesday 26 March 2025 16:50:46 +0000 (0:00:00.207) 0:00:53.921 ******* 2025-03-26 16:50:46.584922 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:46.585634 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:46.585666 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:46.586194 | orchestrator | 2025-03-26 16:50:46.586865 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-26 16:50:46.587184 | orchestrator | Wednesday 26 March 2025 16:50:46 +0000 (0:00:00.192) 0:00:54.114 ******* 2025-03-26 16:50:46.754655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:46.755132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:46.755174 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:46.755582 | orchestrator | 2025-03-26 16:50:46.756431 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] 
******************************** 2025-03-26 16:50:46.756892 | orchestrator | Wednesday 26 March 2025 16:50:46 +0000 (0:00:00.170) 0:00:54.284 ******* 2025-03-26 16:50:47.544650 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:50:47.545155 | orchestrator | 2025-03-26 16:50:47.545987 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-03-26 16:50:47.546436 | orchestrator | Wednesday 26 March 2025 16:50:47 +0000 (0:00:00.789) 0:00:55.074 ******* 2025-03-26 16:50:48.093861 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:50:48.094079 | orchestrator | 2025-03-26 16:50:48.094113 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-26 16:50:48.094594 | orchestrator | Wednesday 26 March 2025 16:50:48 +0000 (0:00:00.550) 0:00:55.624 ******* 2025-03-26 16:50:48.263268 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:50:48.263742 | orchestrator | 2025-03-26 16:50:48.263784 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-26 16:50:48.264016 | orchestrator | Wednesday 26 March 2025 16:50:48 +0000 (0:00:00.166) 0:00:55.791 ******* 2025-03-26 16:50:48.458956 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'vg_name': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'}) 2025-03-26 16:50:48.459687 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'vg_name': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'}) 2025-03-26 16:50:48.459732 | orchestrator | 2025-03-26 16:50:48.460751 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-26 16:50:48.460983 | orchestrator | Wednesday 26 March 2025 16:50:48 +0000 (0:00:00.198) 0:00:55.990 ******* 2025-03-26 16:50:48.646206 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:48.646660 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:48.646957 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:48.647490 | orchestrator | 2025-03-26 16:50:48.647851 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-26 16:50:48.648216 | orchestrator | Wednesday 26 March 2025 16:50:48 +0000 (0:00:00.187) 0:00:56.177 ******* 2025-03-26 16:50:48.831199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:48.831472 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:48.832280 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:48.832578 | orchestrator | 2025-03-26 16:50:48.833110 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-26 16:50:48.833734 | orchestrator | Wednesday 26 March 2025 16:50:48 +0000 (0:00:00.185) 0:00:56.362 ******* 2025-03-26 16:50:49.035318 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e', 'data_vg': 
'ceph-3792eb61-f724-5898-a357-e4730d4e9a9e'})  2025-03-26 16:50:49.037022 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e5825628-1db3-5928-b5a4-d100b751e871', 'data_vg': 'ceph-e5825628-1db3-5928-b5a4-d100b751e871'})  2025-03-26 16:50:49.038150 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:50:49.039038 | orchestrator | 2025-03-26 16:50:49.040176 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-26 16:50:49.040963 | orchestrator | Wednesday 26 March 2025 16:50:49 +0000 (0:00:00.202) 0:00:56.565 ******* 2025-03-26 16:50:50.101610 | orchestrator | ok: [testbed-node-4] => { 2025-03-26 16:50:50.102809 | orchestrator |  "lvm_report": { 2025-03-26 16:50:50.102854 | orchestrator |  "lv": [ 2025-03-26 16:50:50.103116 | orchestrator |  { 2025-03-26 16:50:50.106428 | orchestrator |  "lv_name": "osd-block-3792eb61-f724-5898-a357-e4730d4e9a9e", 2025-03-26 16:50:50.111456 | orchestrator |  "vg_name": "ceph-3792eb61-f724-5898-a357-e4730d4e9a9e" 2025-03-26 16:50:50.111871 | orchestrator |  }, 2025-03-26 16:50:50.112807 | orchestrator |  { 2025-03-26 16:50:50.113917 | orchestrator |  "lv_name": "osd-block-e5825628-1db3-5928-b5a4-d100b751e871", 2025-03-26 16:50:50.115805 | orchestrator |  "vg_name": "ceph-e5825628-1db3-5928-b5a4-d100b751e871" 2025-03-26 16:50:50.116033 | orchestrator |  } 2025-03-26 16:50:50.116875 | orchestrator |  ], 2025-03-26 16:50:50.119687 | orchestrator |  "pv": [ 2025-03-26 16:50:50.120196 | orchestrator |  { 2025-03-26 16:50:50.120642 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-26 16:50:50.121672 | orchestrator |  "vg_name": "ceph-3792eb61-f724-5898-a357-e4730d4e9a9e" 2025-03-26 16:50:50.123174 | orchestrator |  }, 2025-03-26 16:50:50.123202 | orchestrator |  { 2025-03-26 16:50:50.123222 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-26 16:50:50.123688 | orchestrator |  "vg_name": "ceph-e5825628-1db3-5928-b5a4-d100b751e871" 2025-03-26 16:50:50.123812 | orchestrator |  } 2025-03-26 16:50:50.123917 | orchestrator |  ] 2025-03-26 16:50:50.124178 | orchestrator |  } 2025-03-26 16:50:50.124988 | orchestrator | } 2025-03-26 16:50:50.128051 | orchestrator | 2025-03-26 16:50:50.130154 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-03-26 16:50:50.130518 | orchestrator | 2025-03-26 16:50:50.130955 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-03-26 16:50:50.131407 | orchestrator | Wednesday 26 March 2025 16:50:50 +0000 (0:00:01.065) 0:00:57.630 ******* 2025-03-26 16:50:50.421757 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-03-26 16:50:50.422980 | orchestrator | 2025-03-26 16:50:50.423022 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-03-26 16:50:50.423047 | orchestrator | Wednesday 26 March 2025 16:50:50 +0000 (0:00:00.321) 0:00:57.952 ******* 2025-03-26 16:50:50.665826 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:50:50.666642 | orchestrator | 2025-03-26 16:50:50.667478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:50.668334 | orchestrator | Wednesday 26 March 2025 16:50:50 +0000 (0:00:00.242) 0:00:58.195 ******* 2025-03-26 16:50:51.239423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-03-26 16:50:51.239839 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-03-26 16:50:51.240470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-03-26 16:50:51.240570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-03-26 16:50:51.241596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-03-26 16:50:51.242250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-03-26 16:50:51.242553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-03-26 16:50:51.243363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-03-26 16:50:51.243658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-03-26 16:50:51.243690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-03-26 16:50:51.244605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-03-26 16:50:51.248014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-03-26 16:50:51.477807 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-03-26 16:50:51.477915 | orchestrator | 2025-03-26 16:50:51.477932 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:51.477949 | orchestrator | Wednesday 26 March 2025 16:50:51 +0000 (0:00:00.574) 0:00:58.770 ******* 2025-03-26 16:50:51.477979 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:51.478757 | orchestrator | 2025-03-26 16:50:51.479019 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:51.480113 | orchestrator | Wednesday 26 March 2025 16:50:51 +0000 (0:00:00.237) 0:00:59.007 ******* 2025-03-26 16:50:51.677691 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:51.677876 | orchestrator | 2025-03-26 16:50:51.677904 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:51.678728 | orchestrator | Wednesday 26 March 2025 16:50:51 +0000 (0:00:00.199) 0:00:59.207 ******* 2025-03-26 16:50:51.877135 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:51.877276 | orchestrator | 2025-03-26 16:50:51.877562 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:51.877900 | orchestrator | Wednesday 26 March 2025 16:50:51 +0000 (0:00:00.201) 0:00:59.408 ******* 2025-03-26 16:50:52.435288 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:52.436092 | orchestrator | 2025-03-26 16:50:52.436440 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:52.438368 | orchestrator | Wednesday 26 March 2025 16:50:52 +0000 (0:00:00.555) 0:00:59.964 ******* 2025-03-26 16:50:52.630775 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:52.632836 | orchestrator | 2025-03-26 16:50:52.635526 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:52.635561 | orchestrator | Wednesday 26 March 2025 16:50:52 +0000 (0:00:00.196) 0:01:00.160 ******* 2025-03-26 16:50:52.858895 | 
orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:52.859552 | orchestrator | 2025-03-26 16:50:52.860240 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:52.861020 | orchestrator | Wednesday 26 March 2025 16:50:52 +0000 (0:00:00.229) 0:01:00.390 ******* 2025-03-26 16:50:53.065879 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:53.066614 | orchestrator | 2025-03-26 16:50:53.067336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:53.068070 | orchestrator | Wednesday 26 March 2025 16:50:53 +0000 (0:00:00.206) 0:01:00.597 ******* 2025-03-26 16:50:53.262415 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:53.262543 | orchestrator | 2025-03-26 16:50:53.263642 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:53.263927 | orchestrator | Wednesday 26 March 2025 16:50:53 +0000 (0:00:00.196) 0:01:00.793 ******* 2025-03-26 16:50:53.778141 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_778c827f-9114-48f3-8218-520802856b8b) 2025-03-26 16:50:53.779133 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_778c827f-9114-48f3-8218-520802856b8b) 2025-03-26 16:50:53.780023 | orchestrator | 2025-03-26 16:50:53.780582 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:53.783497 | orchestrator | Wednesday 26 March 2025 16:50:53 +0000 (0:00:00.514) 0:01:01.308 ******* 2025-03-26 16:50:54.269478 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_47c04996-6b70-4edc-bb1e-4eda8ee8f7b1) 2025-03-26 16:50:54.270528 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_47c04996-6b70-4edc-bb1e-4eda8ee8f7b1) 2025-03-26 16:50:54.272067 | orchestrator | 2025-03-26 16:50:54.274091 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:54.274604 | orchestrator | Wednesday 26 March 2025 16:50:54 +0000 (0:00:00.491) 0:01:01.800 ******* 2025-03-26 16:50:54.764214 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_018eeca1-6df5-40cc-92d0-1bd4b2f4c48a) 2025-03-26 16:50:54.764431 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_018eeca1-6df5-40cc-92d0-1bd4b2f4c48a) 2025-03-26 16:50:54.764547 | orchestrator | 2025-03-26 16:50:54.765303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:54.765426 | orchestrator | Wednesday 26 March 2025 16:50:54 +0000 (0:00:00.493) 0:01:02.294 ******* 2025-03-26 16:50:55.591252 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_173128c2-f7b8-43d8-ad86-e66bd69e13dc) 2025-03-26 16:50:55.591650 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_173128c2-f7b8-43d8-ad86-e66bd69e13dc) 2025-03-26 16:50:55.591841 | orchestrator | 2025-03-26 16:50:55.591877 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-03-26 16:50:55.592605 | orchestrator | Wednesday 26 March 2025 16:50:55 +0000 (0:00:00.827) 0:01:03.121 ******* 2025-03-26 16:50:56.553485 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-03-26 16:50:56.553701 | orchestrator | 2025-03-26 16:50:56.554342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 
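The repeated "Add known links to the list of available block devices" entries above come from including /ansible/tasks/_add-device-links.yml once per detected device; for devices that expose stable /dev/disk/by-id aliases (here the scsi-*QEMU_QEMU_HARDDISK* entries and the ata-QEMU_DVD-ROM link), those aliases are appended to the candidate device list so that ceph_osd_devices can be matched by ID rather than by volatile sdX names, while loop devices and devices without aliases are skipped. The included file itself is not shown in the log; a minimal sketch of such an include, assuming the aliases are read from ansible_facts['devices'] and collected in an illustrative block_devices fact (both names are assumptions, not the actual OSISM variables):

    # Caller (sketch): run the include once per block device reported by the setup module.
    - name: Add known links to the list of available block devices
      ansible.builtin.include_tasks: _add-device-links.yml
      loop: "{{ ansible_facts['devices'].keys() | list }}"

    # _add-device-links.yml (sketch): append the by-id aliases of one device, skip devices without any.
    - name: Add known links to the list of available block devices
      ansible.builtin.set_fact:
        block_devices: "{{ block_devices | default([]) + ansible_facts['devices'][item]['links']['ids'] }}"
      when: ansible_facts['devices'][item]['links']['ids'] | length > 0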
2025-03-26 16:50:56.554413 | orchestrator | Wednesday 26 March 2025 16:50:56 +0000 (0:00:00.962) 0:01:04.083 ******* 2025-03-26 16:50:57.147122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-03-26 16:50:57.147438 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-03-26 16:50:57.148099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-03-26 16:50:57.148764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-03-26 16:50:57.149837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-03-26 16:50:57.149869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-03-26 16:50:57.150161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-03-26 16:50:57.151264 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-03-26 16:50:57.153779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-03-26 16:50:57.154092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-03-26 16:50:57.154121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-03-26 16:50:57.154136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-03-26 16:50:57.154178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-03-26 16:50:57.154199 | orchestrator | 2025-03-26 16:50:57.154333 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:57.154361 | orchestrator | Wednesday 26 March 2025 16:50:57 +0000 (0:00:00.593) 0:01:04.677 ******* 2025-03-26 16:50:57.371889 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:57.372290 | orchestrator | 2025-03-26 16:50:57.372320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:57.372501 | orchestrator | Wednesday 26 March 2025 16:50:57 +0000 (0:00:00.226) 0:01:04.903 ******* 2025-03-26 16:50:57.583360 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:57.583961 | orchestrator | 2025-03-26 16:50:57.583996 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:57.584405 | orchestrator | Wednesday 26 March 2025 16:50:57 +0000 (0:00:00.210) 0:01:05.113 ******* 2025-03-26 16:50:57.815354 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:57.815734 | orchestrator | 2025-03-26 16:50:57.816564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:57.816855 | orchestrator | Wednesday 26 March 2025 16:50:57 +0000 (0:00:00.232) 0:01:05.346 ******* 2025-03-26 16:50:58.047495 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:58.048276 | orchestrator | 2025-03-26 16:50:58.048580 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:58.049666 | orchestrator | Wednesday 26 March 2025 16:50:58 +0000 (0:00:00.232) 0:01:05.578 ******* 2025-03-26 16:50:58.263014 | 
orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:58.263871 | orchestrator | 2025-03-26 16:50:58.263906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:58.264185 | orchestrator | Wednesday 26 March 2025 16:50:58 +0000 (0:00:00.215) 0:01:05.794 ******* 2025-03-26 16:50:58.481192 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:58.482626 | orchestrator | 2025-03-26 16:50:58.484038 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:58.484518 | orchestrator | Wednesday 26 March 2025 16:50:58 +0000 (0:00:00.218) 0:01:06.012 ******* 2025-03-26 16:50:58.695827 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:58.696274 | orchestrator | 2025-03-26 16:50:58.697270 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:58.698348 | orchestrator | Wednesday 26 March 2025 16:50:58 +0000 (0:00:00.211) 0:01:06.224 ******* 2025-03-26 16:50:58.920806 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:50:58.921522 | orchestrator | 2025-03-26 16:50:58.921596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:58.921692 | orchestrator | Wednesday 26 March 2025 16:50:58 +0000 (0:00:00.227) 0:01:06.452 ******* 2025-03-26 16:50:59.968835 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-03-26 16:50:59.969016 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-03-26 16:50:59.970149 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-03-26 16:50:59.970757 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-03-26 16:50:59.972099 | orchestrator | 2025-03-26 16:50:59.972914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:50:59.974124 | orchestrator | Wednesday 26 March 2025 16:50:59 +0000 (0:00:01.043) 0:01:07.496 ******* 2025-03-26 16:51:00.197060 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:00.197748 | orchestrator | 2025-03-26 16:51:00.198119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:51:00.198828 | orchestrator | Wednesday 26 March 2025 16:51:00 +0000 (0:00:00.232) 0:01:07.728 ******* 2025-03-26 16:51:00.410244 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:00.411160 | orchestrator | 2025-03-26 16:51:00.411761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:51:00.412208 | orchestrator | Wednesday 26 March 2025 16:51:00 +0000 (0:00:00.212) 0:01:07.940 ******* 2025-03-26 16:51:00.638006 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:00.638868 | orchestrator | 2025-03-26 16:51:00.638914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-03-26 16:51:00.639338 | orchestrator | Wednesday 26 March 2025 16:51:00 +0000 (0:00:00.227) 0:01:08.168 ******* 2025-03-26 16:51:00.876617 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:00.876766 | orchestrator | 2025-03-26 16:51:00.876921 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-03-26 16:51:00.877762 | orchestrator | Wednesday 26 March 2025 16:51:00 +0000 (0:00:00.239) 0:01:08.407 ******* 2025-03-26 16:51:01.022767 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:01.022888 | orchestrator | 
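With the device inventory assembled, the play now provisions the OSD backing volumes on testbed-node-5: the following entries build a dict of block VGs to physical volumes from ceph_osd_devices (sdb and sdc, each with a stable osd_lvm_uuid), then create one ceph-<uuid> volume group per device and one osd-block-<uuid> logical volume spanning it, which is why only these two tasks report "changed". A hedged sketch of equivalent tasks using the community.general.lvg and community.general.lvol modules; the _block_vg_pvs lookup below is an assumed helper, not the playbook's real variable:

    # Sketch only -- the real playbook derives the VG -> PV mapping from ceph_osd_devices first.
    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.data_vg }}"                     # e.g. ceph-a669efbe-38ad-5491-a7b2-472b52b48777
        pvs: "{{ _block_vg_pvs[item.data_vg] }}"     # e.g. /dev/sdb (assumed mapping)
        state: present
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"                        # e.g. osd-block-a669efbe-38ad-5491-a7b2-472b52b48777
        size: 100%VG                                 # whole-device OSD; no separate DB/WAL devices here
        state: present
      loop: "{{ lvm_volumes }}"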
2025-03-26 16:51:01.023805 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-03-26 16:51:01.024691 | orchestrator | Wednesday 26 March 2025 16:51:01 +0000 (0:00:00.145) 0:01:08.553 ******* 2025-03-26 16:51:01.283816 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a669efbe-38ad-5491-a7b2-472b52b48777'}}) 2025-03-26 16:51:01.284281 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ec9d4bae-9cf1-5a1f-8035-4dbd27640959'}}) 2025-03-26 16:51:01.285096 | orchestrator | 2025-03-26 16:51:01.285829 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-03-26 16:51:01.286530 | orchestrator | Wednesday 26 March 2025 16:51:01 +0000 (0:00:00.261) 0:01:08.815 ******* 2025-03-26 16:51:03.536979 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'}) 2025-03-26 16:51:03.537868 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'}) 2025-03-26 16:51:03.537911 | orchestrator | 2025-03-26 16:51:03.538683 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-03-26 16:51:03.539158 | orchestrator | Wednesday 26 March 2025 16:51:03 +0000 (0:00:02.250) 0:01:11.065 ******* 2025-03-26 16:51:03.721224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:03.721322 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:03.722359 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:03.722970 | orchestrator | 2025-03-26 16:51:03.723614 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-03-26 16:51:03.724634 | orchestrator | Wednesday 26 March 2025 16:51:03 +0000 (0:00:00.185) 0:01:11.251 ******* 2025-03-26 16:51:05.110128 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'}) 2025-03-26 16:51:05.110851 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'}) 2025-03-26 16:51:05.111360 | orchestrator | 2025-03-26 16:51:05.111627 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-03-26 16:51:05.112326 | orchestrator | Wednesday 26 March 2025 16:51:05 +0000 (0:00:01.388) 0:01:12.639 ******* 2025-03-26 16:51:05.289903 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:05.290852 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:05.291618 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:05.292599 | orchestrator | 2025-03-26 16:51:05.295022 | orchestrator | TASK [Create DB VGs] 
*********************************************************** 2025-03-26 16:51:05.296290 | orchestrator | Wednesday 26 March 2025 16:51:05 +0000 (0:00:00.178) 0:01:12.818 ******* 2025-03-26 16:51:05.448196 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:05.448661 | orchestrator | 2025-03-26 16:51:05.449613 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-03-26 16:51:05.450383 | orchestrator | Wednesday 26 March 2025 16:51:05 +0000 (0:00:00.160) 0:01:12.979 ******* 2025-03-26 16:51:05.627375 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:05.627569 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:05.627990 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:05.628338 | orchestrator | 2025-03-26 16:51:05.628814 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-03-26 16:51:05.629068 | orchestrator | Wednesday 26 March 2025 16:51:05 +0000 (0:00:00.179) 0:01:13.158 ******* 2025-03-26 16:51:05.780291 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:05.781312 | orchestrator | 2025-03-26 16:51:05.781638 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-03-26 16:51:05.782745 | orchestrator | Wednesday 26 March 2025 16:51:05 +0000 (0:00:00.152) 0:01:13.311 ******* 2025-03-26 16:51:05.975706 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:05.976194 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:05.977306 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:05.978118 | orchestrator | 2025-03-26 16:51:05.979262 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-03-26 16:51:05.979687 | orchestrator | Wednesday 26 March 2025 16:51:05 +0000 (0:00:00.195) 0:01:13.506 ******* 2025-03-26 16:51:06.138981 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:06.139157 | orchestrator | 2025-03-26 16:51:06.329715 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-03-26 16:51:06.329783 | orchestrator | Wednesday 26 March 2025 16:51:06 +0000 (0:00:00.163) 0:01:13.670 ******* 2025-03-26 16:51:06.329807 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:06.332362 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:06.332761 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:06.332786 | orchestrator | 2025-03-26 16:51:06.332803 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-03-26 16:51:06.333251 | orchestrator | Wednesday 26 March 2025 16:51:06 +0000 (0:00:00.190) 0:01:13.861 ******* 2025-03-26 16:51:06.501742 | orchestrator | ok: 
[testbed-node-5] 2025-03-26 16:51:06.502316 | orchestrator | 2025-03-26 16:51:06.502961 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-03-26 16:51:06.503382 | orchestrator | Wednesday 26 March 2025 16:51:06 +0000 (0:00:00.172) 0:01:14.033 ******* 2025-03-26 16:51:06.674180 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:06.675096 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:06.675864 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:06.677380 | orchestrator | 2025-03-26 16:51:06.678057 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-03-26 16:51:06.678962 | orchestrator | Wednesday 26 March 2025 16:51:06 +0000 (0:00:00.170) 0:01:14.204 ******* 2025-03-26 16:51:06.848243 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:06.848932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:06.849373 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:06.850152 | orchestrator | 2025-03-26 16:51:06.850337 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-03-26 16:51:06.851122 | orchestrator | Wednesday 26 March 2025 16:51:06 +0000 (0:00:00.174) 0:01:14.379 ******* 2025-03-26 16:51:07.282479 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:07.283280 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:07.284466 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:07.285075 | orchestrator | 2025-03-26 16:51:07.285500 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-03-26 16:51:07.285870 | orchestrator | Wednesday 26 March 2025 16:51:07 +0000 (0:00:00.435) 0:01:14.814 ******* 2025-03-26 16:51:07.427745 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:07.427950 | orchestrator | 2025-03-26 16:51:07.428609 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-03-26 16:51:07.429495 | orchestrator | Wednesday 26 March 2025 16:51:07 +0000 (0:00:00.144) 0:01:14.958 ******* 2025-03-26 16:51:07.623830 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:07.624253 | orchestrator | 2025-03-26 16:51:07.624989 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-03-26 16:51:07.625757 | orchestrator | Wednesday 26 March 2025 16:51:07 +0000 (0:00:00.196) 0:01:15.155 ******* 2025-03-26 16:51:07.773194 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:07.774959 | orchestrator | 2025-03-26 16:51:07.775734 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-03-26 16:51:07.776101 | orchestrator | 
Wednesday 26 March 2025 16:51:07 +0000 (0:00:00.148) 0:01:15.303 ******* 2025-03-26 16:51:07.926186 | orchestrator | ok: [testbed-node-5] => { 2025-03-26 16:51:07.927172 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-03-26 16:51:07.927202 | orchestrator | } 2025-03-26 16:51:07.927666 | orchestrator | 2025-03-26 16:51:07.929209 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-03-26 16:51:07.929509 | orchestrator | Wednesday 26 March 2025 16:51:07 +0000 (0:00:00.152) 0:01:15.456 ******* 2025-03-26 16:51:08.080021 | orchestrator | ok: [testbed-node-5] => { 2025-03-26 16:51:08.080618 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-03-26 16:51:08.081719 | orchestrator | } 2025-03-26 16:51:08.082671 | orchestrator | 2025-03-26 16:51:08.082856 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-03-26 16:51:08.083246 | orchestrator | Wednesday 26 March 2025 16:51:08 +0000 (0:00:00.153) 0:01:15.609 ******* 2025-03-26 16:51:08.223062 | orchestrator | ok: [testbed-node-5] => { 2025-03-26 16:51:08.223263 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-03-26 16:51:08.224347 | orchestrator | } 2025-03-26 16:51:08.224985 | orchestrator | 2025-03-26 16:51:08.225250 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-03-26 16:51:08.225541 | orchestrator | Wednesday 26 March 2025 16:51:08 +0000 (0:00:00.144) 0:01:15.754 ******* 2025-03-26 16:51:08.771309 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:51:08.772298 | orchestrator | 2025-03-26 16:51:08.772744 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-03-26 16:51:08.773843 | orchestrator | Wednesday 26 March 2025 16:51:08 +0000 (0:00:00.546) 0:01:16.300 ******* 2025-03-26 16:51:09.377331 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:51:09.378889 | orchestrator | 2025-03-26 16:51:09.378940 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-03-26 16:51:09.381450 | orchestrator | Wednesday 26 March 2025 16:51:09 +0000 (0:00:00.599) 0:01:16.900 ******* 2025-03-26 16:51:09.903825 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:51:09.904395 | orchestrator | 2025-03-26 16:51:09.904463 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-03-26 16:51:09.907225 | orchestrator | Wednesday 26 March 2025 16:51:09 +0000 (0:00:00.531) 0:01:17.431 ******* 2025-03-26 16:51:10.049249 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:51:10.050333 | orchestrator | 2025-03-26 16:51:10.051539 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-03-26 16:51:10.051995 | orchestrator | Wednesday 26 March 2025 16:51:10 +0000 (0:00:00.145) 0:01:17.577 ******* 2025-03-26 16:51:10.409500 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:10.410317 | orchestrator | 2025-03-26 16:51:10.410745 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-03-26 16:51:10.411498 | orchestrator | Wednesday 26 March 2025 16:51:10 +0000 (0:00:00.362) 0:01:17.940 ******* 2025-03-26 16:51:10.532384 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:10.532615 | orchestrator | 2025-03-26 16:51:10.532640 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-03-26 16:51:10.532660 | 
orchestrator | Wednesday 26 March 2025 16:51:10 +0000 (0:00:00.122) 0:01:18.063 ******* 2025-03-26 16:51:10.689192 | orchestrator | ok: [testbed-node-5] => { 2025-03-26 16:51:10.689523 | orchestrator |  "vgs_report": { 2025-03-26 16:51:10.689678 | orchestrator |  "vg": [] 2025-03-26 16:51:10.691183 | orchestrator |  } 2025-03-26 16:51:10.691272 | orchestrator | } 2025-03-26 16:51:10.691955 | orchestrator | 2025-03-26 16:51:10.692378 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-03-26 16:51:10.695486 | orchestrator | Wednesday 26 March 2025 16:51:10 +0000 (0:00:00.157) 0:01:18.220 ******* 2025-03-26 16:51:10.841901 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:10.842345 | orchestrator | 2025-03-26 16:51:10.842679 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-03-26 16:51:10.843128 | orchestrator | Wednesday 26 March 2025 16:51:10 +0000 (0:00:00.151) 0:01:18.372 ******* 2025-03-26 16:51:10.994901 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:10.995250 | orchestrator | 2025-03-26 16:51:10.996254 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-03-26 16:51:10.997705 | orchestrator | Wednesday 26 March 2025 16:51:10 +0000 (0:00:00.151) 0:01:18.523 ******* 2025-03-26 16:51:11.148138 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:11.148513 | orchestrator | 2025-03-26 16:51:11.148747 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-03-26 16:51:11.149233 | orchestrator | Wednesday 26 March 2025 16:51:11 +0000 (0:00:00.155) 0:01:18.679 ******* 2025-03-26 16:51:11.289663 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:11.289828 | orchestrator | 2025-03-26 16:51:11.290287 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-03-26 16:51:11.290759 | orchestrator | Wednesday 26 March 2025 16:51:11 +0000 (0:00:00.140) 0:01:18.819 ******* 2025-03-26 16:51:11.443970 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:11.444176 | orchestrator | 2025-03-26 16:51:11.445088 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-03-26 16:51:11.446086 | orchestrator | Wednesday 26 March 2025 16:51:11 +0000 (0:00:00.155) 0:01:18.975 ******* 2025-03-26 16:51:11.602436 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:11.602821 | orchestrator | 2025-03-26 16:51:11.603169 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-03-26 16:51:11.603910 | orchestrator | Wednesday 26 March 2025 16:51:11 +0000 (0:00:00.158) 0:01:19.133 ******* 2025-03-26 16:51:11.766231 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:11.766445 | orchestrator | 2025-03-26 16:51:11.767557 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-03-26 16:51:11.768169 | orchestrator | Wednesday 26 March 2025 16:51:11 +0000 (0:00:00.161) 0:01:19.295 ******* 2025-03-26 16:51:11.889170 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:11.889686 | orchestrator | 2025-03-26 16:51:11.890640 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-03-26 16:51:11.891588 | orchestrator | Wednesday 26 March 2025 16:51:11 +0000 (0:00:00.125) 0:01:19.420 ******* 2025-03-26 16:51:12.047635 | orchestrator | 
skipping: [testbed-node-5] 2025-03-26 16:51:12.048134 | orchestrator | 2025-03-26 16:51:12.048801 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-03-26 16:51:12.049643 | orchestrator | Wednesday 26 March 2025 16:51:12 +0000 (0:00:00.157) 0:01:19.577 ******* 2025-03-26 16:51:12.477036 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:12.478132 | orchestrator | 2025-03-26 16:51:12.478172 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-03-26 16:51:12.479546 | orchestrator | Wednesday 26 March 2025 16:51:12 +0000 (0:00:00.430) 0:01:20.008 ******* 2025-03-26 16:51:12.633308 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:12.634162 | orchestrator | 2025-03-26 16:51:12.634196 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-03-26 16:51:12.634784 | orchestrator | Wednesday 26 March 2025 16:51:12 +0000 (0:00:00.155) 0:01:20.164 ******* 2025-03-26 16:51:12.812738 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:12.813830 | orchestrator | 2025-03-26 16:51:12.813867 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-03-26 16:51:12.815201 | orchestrator | Wednesday 26 March 2025 16:51:12 +0000 (0:00:00.177) 0:01:20.341 ******* 2025-03-26 16:51:12.961314 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:12.962068 | orchestrator | 2025-03-26 16:51:12.963445 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-03-26 16:51:12.964713 | orchestrator | Wednesday 26 March 2025 16:51:12 +0000 (0:00:00.148) 0:01:20.490 ******* 2025-03-26 16:51:13.108868 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:13.109696 | orchestrator | 2025-03-26 16:51:13.109728 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-03-26 16:51:13.110578 | orchestrator | Wednesday 26 March 2025 16:51:13 +0000 (0:00:00.149) 0:01:20.639 ******* 2025-03-26 16:51:13.287348 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:13.288925 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:13.291380 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:13.292236 | orchestrator | 2025-03-26 16:51:13.293295 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-03-26 16:51:13.294495 | orchestrator | Wednesday 26 March 2025 16:51:13 +0000 (0:00:00.177) 0:01:20.817 ******* 2025-03-26 16:51:13.470269 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:13.471678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:13.472734 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:13.473786 | orchestrator | 2025-03-26 16:51:13.474622 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-03-26 16:51:13.475347 | orchestrator | Wednesday 26 March 2025 
16:51:13 +0000 (0:00:00.182) 0:01:20.999 ******* 2025-03-26 16:51:13.636618 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:13.638626 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:13.640570 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:13.641822 | orchestrator | 2025-03-26 16:51:13.643408 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-03-26 16:51:13.644406 | orchestrator | Wednesday 26 March 2025 16:51:13 +0000 (0:00:00.166) 0:01:21.166 ******* 2025-03-26 16:51:13.816219 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:13.816781 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:13.817280 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:13.818143 | orchestrator | 2025-03-26 16:51:13.818949 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-03-26 16:51:13.819322 | orchestrator | Wednesday 26 March 2025 16:51:13 +0000 (0:00:00.180) 0:01:21.346 ******* 2025-03-26 16:51:13.999933 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:14.000480 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:14.001560 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:14.002831 | orchestrator | 2025-03-26 16:51:14.003177 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-03-26 16:51:14.004131 | orchestrator | Wednesday 26 March 2025 16:51:13 +0000 (0:00:00.184) 0:01:21.530 ******* 2025-03-26 16:51:14.171180 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:14.172552 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:14.173741 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:14.176686 | orchestrator | 2025-03-26 16:51:14.182654 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-03-26 16:51:14.182784 | orchestrator | Wednesday 26 March 2025 16:51:14 +0000 (0:00:00.172) 0:01:21.703 ******* 2025-03-26 16:51:14.606800 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:14.607495 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:14.608193 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:14.608630 | 
orchestrator | 2025-03-26 16:51:14.609482 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-03-26 16:51:14.609880 | orchestrator | Wednesday 26 March 2025 16:51:14 +0000 (0:00:00.434) 0:01:22.137 ******* 2025-03-26 16:51:14.833273 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:14.834403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:14.834718 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:14.835566 | orchestrator | 2025-03-26 16:51:14.836360 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-03-26 16:51:14.836959 | orchestrator | Wednesday 26 March 2025 16:51:14 +0000 (0:00:00.224) 0:01:22.362 ******* 2025-03-26 16:51:15.362238 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:51:15.362349 | orchestrator | 2025-03-26 16:51:15.362372 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-03-26 16:51:15.896723 | orchestrator | Wednesday 26 March 2025 16:51:15 +0000 (0:00:00.529) 0:01:22.892 ******* 2025-03-26 16:51:15.896810 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:51:15.896870 | orchestrator | 2025-03-26 16:51:15.896891 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-03-26 16:51:15.897312 | orchestrator | Wednesday 26 March 2025 16:51:15 +0000 (0:00:00.533) 0:01:23.426 ******* 2025-03-26 16:51:16.046157 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:51:16.046663 | orchestrator | 2025-03-26 16:51:16.048324 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-03-26 16:51:16.049350 | orchestrator | Wednesday 26 March 2025 16:51:16 +0000 (0:00:00.150) 0:01:23.577 ******* 2025-03-26 16:51:16.243858 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'vg_name': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'}) 2025-03-26 16:51:16.244829 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'vg_name': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'}) 2025-03-26 16:51:16.244875 | orchestrator | 2025-03-26 16:51:16.245390 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-03-26 16:51:16.246161 | orchestrator | Wednesday 26 March 2025 16:51:16 +0000 (0:00:00.197) 0:01:23.775 ******* 2025-03-26 16:51:16.442614 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:16.443013 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:16.443050 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:16.443078 | orchestrator | 2025-03-26 16:51:16.444012 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-03-26 16:51:16.626954 | orchestrator | Wednesday 26 March 2025 16:51:16 +0000 (0:00:00.198) 0:01:23.973 ******* 2025-03-26 16:51:16.627035 | orchestrator | skipping: [testbed-node-5] 
=> (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:16.627577 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:16.628178 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:16.629267 | orchestrator | 2025-03-26 16:51:16.629961 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-03-26 16:51:16.630738 | orchestrator | Wednesday 26 March 2025 16:51:16 +0000 (0:00:00.183) 0:01:24.157 ******* 2025-03-26 16:51:16.822001 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a669efbe-38ad-5491-a7b2-472b52b48777', 'data_vg': 'ceph-a669efbe-38ad-5491-a7b2-472b52b48777'})  2025-03-26 16:51:16.822833 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959', 'data_vg': 'ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959'})  2025-03-26 16:51:16.822878 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:16.823684 | orchestrator | 2025-03-26 16:51:16.823945 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-03-26 16:51:16.825452 | orchestrator | Wednesday 26 March 2025 16:51:16 +0000 (0:00:00.195) 0:01:24.352 ******* 2025-03-26 16:51:17.566234 | orchestrator | ok: [testbed-node-5] => { 2025-03-26 16:51:17.566712 | orchestrator |  "lvm_report": { 2025-03-26 16:51:17.566756 | orchestrator |  "lv": [ 2025-03-26 16:51:17.566878 | orchestrator |  { 2025-03-26 16:51:17.567481 | orchestrator |  "lv_name": "osd-block-a669efbe-38ad-5491-a7b2-472b52b48777", 2025-03-26 16:51:17.568129 | orchestrator |  "vg_name": "ceph-a669efbe-38ad-5491-a7b2-472b52b48777" 2025-03-26 16:51:17.568220 | orchestrator |  }, 2025-03-26 16:51:17.568853 | orchestrator |  { 2025-03-26 16:51:17.569682 | orchestrator |  "lv_name": "osd-block-ec9d4bae-9cf1-5a1f-8035-4dbd27640959", 2025-03-26 16:51:17.569762 | orchestrator |  "vg_name": "ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959" 2025-03-26 16:51:17.570458 | orchestrator |  } 2025-03-26 16:51:17.570603 | orchestrator |  ], 2025-03-26 16:51:17.571046 | orchestrator |  "pv": [ 2025-03-26 16:51:17.571544 | orchestrator |  { 2025-03-26 16:51:17.571627 | orchestrator |  "pv_name": "/dev/sdb", 2025-03-26 16:51:17.572526 | orchestrator |  "vg_name": "ceph-a669efbe-38ad-5491-a7b2-472b52b48777" 2025-03-26 16:51:17.573090 | orchestrator |  }, 2025-03-26 16:51:17.573843 | orchestrator |  { 2025-03-26 16:51:17.574631 | orchestrator |  "pv_name": "/dev/sdc", 2025-03-26 16:51:17.575029 | orchestrator |  "vg_name": "ceph-ec9d4bae-9cf1-5a1f-8035-4dbd27640959" 2025-03-26 16:51:17.575373 | orchestrator |  } 2025-03-26 16:51:17.576007 | orchestrator |  ] 2025-03-26 16:51:17.576279 | orchestrator |  } 2025-03-26 16:51:17.576705 | orchestrator | } 2025-03-26 16:51:17.577680 | orchestrator | 2025-03-26 16:51:17.578615 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 16:51:17.578693 | orchestrator | 2025-03-26 16:51:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 16:51:17.578788 | orchestrator | 2025-03-26 16:51:17 | INFO  | Please wait and do not abort execution. 
2025-03-26 16:51:17.580001 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-03-26 16:51:17.580510 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-03-26 16:51:17.580980 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-03-26 16:51:17.581322 | orchestrator | 2025-03-26 16:51:17.581723 | orchestrator | 2025-03-26 16:51:17.582079 | orchestrator | 2025-03-26 16:51:17.582359 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-26 16:51:17.583082 | orchestrator | Wednesday 26 March 2025 16:51:17 +0000 (0:00:00.745) 0:01:25.097 ******* 2025-03-26 16:51:17.583605 | orchestrator | =============================================================================== 2025-03-26 16:51:17.584093 | orchestrator | Create block VGs -------------------------------------------------------- 6.83s 2025-03-26 16:51:17.584775 | orchestrator | Create block LVs -------------------------------------------------------- 4.37s 2025-03-26 16:51:17.585125 | orchestrator | Print LVM report data --------------------------------------------------- 2.90s 2025-03-26 16:51:17.586193 | orchestrator | Add known links to the list of available block devices ------------------ 1.91s 2025-03-26 16:51:17.586608 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.87s 2025-03-26 16:51:17.586650 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.86s 2025-03-26 16:51:17.586847 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.75s 2025-03-26 16:51:17.587088 | orchestrator | Add known partitions to the list of available block devices ------------- 1.67s 2025-03-26 16:51:17.587501 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.67s 2025-03-26 16:51:17.587914 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.67s 2025-03-26 16:51:17.588460 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s 2025-03-26 16:51:17.589115 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2025-03-26 16:51:17.589398 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s 2025-03-26 16:51:17.590471 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.95s 2025-03-26 16:51:17.590794 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.95s 2025-03-26 16:51:17.591223 | orchestrator | Create list of VG/LV names ---------------------------------------------- 0.90s 2025-03-26 16:51:17.591570 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2025-03-26 16:51:17.592019 | orchestrator | Count OSDs put on ceph_db_wal_devices defined in lvm_volumes ------------ 0.83s 2025-03-26 16:51:17.592524 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.81s 2025-03-26 16:51:17.593102 | orchestrator | Print LVM VGs report data ----------------------------------------------- 0.80s 2025-03-26 16:51:19.889716 | orchestrator | 2025-03-26 16:51:19 | INFO  | Task 1cb3f782-8d7f-458b-8afc-38563d518916 (facts) was prepared for execution. 
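The lvm_report blocks printed for testbed-node-4 and testbed-node-5 are simply the merged JSON output of lvs and pvs restricted to name/VG columns, gathered by the "Get list of Ceph LVs/PVs with associated VGs" tasks and merged by "Combine JSON from _lvs_cmd_output/_pvs_cmd_output". A hedged sketch of how such a report can be produced (the register names are taken from the task titles; the exact commands and any ceph-* filtering are assumptions):

    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs
      ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
      register: _pvs_cmd_output
      changed_when: false

    # lvs/pvs JSON looks like {"report": [{"lv": [...]}]} and {"report": [{"pv": [...]}]};
    # merging the first report element of each yields the {"lv": [...], "pv": [...]} shape seen above.
    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        lvm_report: "{{ (_lvs_cmd_output.stdout | from_json).report[0] | combine((_pvs_cmd_output.stdout | from_json).report[0]) }}"

Read together with the PLAY RECAP above, ok=51 / changed=2 / skipped=62 on each of testbed-node-3/4/5 matches the log: only "Create block VGs" and "Create block LVs" changed anything, and every DB/WAL-specific task was skipped, presumably because no dedicated ceph_db_devices, ceph_wal_devices or ceph_db_wal_devices are defined for this testbed (the corresponding VG reports and _num_osds_wanted_* dicts are empty).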
2025-03-26 16:51:23.856840 | orchestrator | 2025-03-26 16:51:19 | INFO  | It takes a moment until task 1cb3f782-8d7f-458b-8afc-38563d518916 (facts) has been started and output is visible here. 2025-03-26 16:51:23.856987 | orchestrator | 2025-03-26 16:51:23.857071 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-03-26 16:51:23.857301 | orchestrator | 2025-03-26 16:51:23.857778 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-03-26 16:51:23.858134 | orchestrator | Wednesday 26 March 2025 16:51:23 +0000 (0:00:00.274) 0:00:00.274 ******* 2025-03-26 16:51:25.439389 | orchestrator | ok: [testbed-manager] 2025-03-26 16:51:25.440008 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:51:25.440050 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:51:25.440487 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:51:25.441246 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:51:25.441966 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:51:25.442586 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:51:25.443517 | orchestrator | 2025-03-26 16:51:25.447019 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-03-26 16:51:25.448357 | orchestrator | Wednesday 26 March 2025 16:51:25 +0000 (0:00:01.576) 0:00:01.850 ******* 2025-03-26 16:51:25.650772 | orchestrator | skipping: [testbed-manager] 2025-03-26 16:51:25.743263 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:51:25.842813 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:51:25.947368 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:51:26.038982 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:51:27.085569 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:51:27.085770 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:27.085799 | orchestrator | 2025-03-26 16:51:27.086255 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-03-26 16:51:27.086366 | orchestrator | 2025-03-26 16:51:27.087204 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-03-26 16:51:27.087684 | orchestrator | Wednesday 26 March 2025 16:51:27 +0000 (0:00:01.654) 0:00:03.505 ******* 2025-03-26 16:51:32.129080 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:51:32.129269 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:51:32.129298 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:51:32.129576 | orchestrator | ok: [testbed-manager] 2025-03-26 16:51:32.129802 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:51:32.130242 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:51:32.131206 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:51:32.131315 | orchestrator | 2025-03-26 16:51:32.131634 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-03-26 16:51:32.131781 | orchestrator | 2025-03-26 16:51:32.132515 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-03-26 16:51:32.132954 | orchestrator | Wednesday 26 March 2025 16:51:32 +0000 (0:00:05.044) 0:00:08.549 ******* 2025-03-26 16:51:32.516712 | orchestrator | skipping: [testbed-manager] 2025-03-26 16:51:32.602294 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:51:32.682191 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:51:32.771435 | orchestrator | skipping: [testbed-node-2] 2025-03-26 
16:51:32.860503 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:51:32.900217 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:51:32.900955 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:51:32.901721 | orchestrator | 2025-03-26 16:51:32.902627 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 16:51:32.902678 | orchestrator | 2025-03-26 16:51:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-03-26 16:51:32.903607 | orchestrator | 2025-03-26 16:51:32 | INFO  | Please wait and do not abort execution. 2025-03-26 16:51:32.903642 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 16:51:32.904314 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 16:51:32.904348 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 16:51:32.904409 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 16:51:32.904656 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 16:51:32.904971 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 16:51:32.905963 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-03-26 16:51:32.906196 | orchestrator | 2025-03-26 16:51:32.906543 | orchestrator | Wednesday 26 March 2025 16:51:32 +0000 (0:00:00.772) 0:00:09.321 ******* 2025-03-26 16:51:32.907120 | orchestrator | =============================================================================== 2025-03-26 16:51:32.907434 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.04s 2025-03-26 16:51:32.907678 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.65s 2025-03-26 16:51:32.908329 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.58s 2025-03-26 16:51:33.614888 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.77s 2025-03-26 16:51:33.615018 | orchestrator | 2025-03-26 16:51:33.617381 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Mar 26 16:51:33 UTC 2025 2025-03-26 16:51:35.236884 | orchestrator | 2025-03-26 16:51:35.236996 | orchestrator | 2025-03-26 16:51:35 | INFO  | Collection nutshell is prepared for execution 2025-03-26 16:51:35.241883 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [0] - dotfiles 2025-03-26 16:51:35.241937 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [0] - homer 2025-03-26 16:51:35.243740 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [0] - netdata 2025-03-26 16:51:35.243770 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [0] - openstackclient 2025-03-26 16:51:35.243786 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [0] - phpmyadmin 2025-03-26 16:51:35.243801 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [0] - common 2025-03-26 16:51:35.243824 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [1] -- loadbalancer 2025-03-26 16:51:35.243897 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [2] --- opensearch 2025-03-26 16:51:35.243917 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [2] --- mariadb-ng 2025-03-26 16:51:35.243932 | orchestrator | 2025-03-26 
16:51:35 | INFO  | D [3] ---- horizon 2025-03-26 16:51:35.243952 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [3] ---- keystone 2025-03-26 16:51:35.244493 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [4] ----- neutron 2025-03-26 16:51:35.244593 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [5] ------ wait-for-nova 2025-03-26 16:51:35.244637 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [5] ------ octavia 2025-03-26 16:51:35.244723 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [4] ----- barbican 2025-03-26 16:51:35.244806 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [4] ----- designate 2025-03-26 16:51:35.244823 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [4] ----- ironic 2025-03-26 16:51:35.244837 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [4] ----- placement 2025-03-26 16:51:35.244891 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [4] ----- magnum 2025-03-26 16:51:35.244946 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [1] -- openvswitch 2025-03-26 16:51:35.245072 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [2] --- ovn 2025-03-26 16:51:35.245096 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [1] -- memcached 2025-03-26 16:51:35.245187 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [1] -- redis 2025-03-26 16:51:35.245255 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [1] -- rabbitmq-ng 2025-03-26 16:51:35.245276 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [0] - kubernetes 2025-03-26 16:51:35.245476 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [1] -- kubeconfig 2025-03-26 16:51:35.245503 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [1] -- copy-kubeconfig 2025-03-26 16:51:35.245522 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [0] - ceph 2025-03-26 16:51:35.247050 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [1] -- ceph-pools 2025-03-26 16:51:35.247229 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [2] --- copy-ceph-keys 2025-03-26 16:51:35.247285 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [3] ---- cephclient 2025-03-26 16:51:35.247348 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-03-26 16:51:35.247404 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [4] ----- wait-for-keystone 2025-03-26 16:51:35.247540 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [5] ------ kolla-ceph-rgw 2025-03-26 16:51:35.247571 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [5] ------ glance 2025-03-26 16:51:35.247601 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [5] ------ cinder 2025-03-26 16:51:35.247877 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [5] ------ nova 2025-03-26 16:51:35.247913 | orchestrator | 2025-03-26 16:51:35 | INFO  | A [4] ----- prometheus 2025-03-26 16:51:35.467976 | orchestrator | 2025-03-26 16:51:35 | INFO  | D [5] ------ grafana 2025-03-26 16:51:35.468064 | orchestrator | 2025-03-26 16:51:35 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-03-26 16:51:37.797858 | orchestrator | 2025-03-26 16:51:35 | INFO  | Tasks are running in the background 2025-03-26 16:51:37.797991 | orchestrator | 2025-03-26 16:51:37 | INFO  | No task IDs specified, wait for all currently running tasks 2025-03-26 16:51:39.915445 | orchestrator | 2025-03-26 16:51:39 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:51:39.918165 | orchestrator | 2025-03-26 16:51:39 | INFO  | Task b1d8d300-853e-40c7-b129-5a6131ff19f0 is in state STARTED 2025-03-26 16:51:39.918230 | orchestrator | 2025-03-26 16:51:39 | INFO  | Task 
a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:51:39.918258 | orchestrator | 2025-03-26 16:51:39 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:51:42.965025 | orchestrator | 2025-03-26 16:51:39 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:51:42.965136 | orchestrator | 2025-03-26 16:51:39 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:51:42.965188 | orchestrator | 2025-03-26 16:51:39 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:51:42.965223 | orchestrator | 2025-03-26 16:51:42 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:51:42.966509 | orchestrator | 2025-03-26 16:51:42 | INFO  | Task b1d8d300-853e-40c7-b129-5a6131ff19f0 is in state STARTED 2025-03-26 16:51:42.967904 | orchestrator | 2025-03-26 16:51:42 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:51:42.968916 | orchestrator | 2025-03-26 16:51:42 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:51:42.974191 | orchestrator | 2025-03-26 16:51:42 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:51:46.046840 | orchestrator | 2025-03-26 16:51:42 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:51:46.046965 | orchestrator | 2025-03-26 16:51:42 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:51:46.047001 | orchestrator | 2025-03-26 16:51:46 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:51:46.047426 | orchestrator | 2025-03-26 16:51:46 | INFO  | Task b1d8d300-853e-40c7-b129-5a6131ff19f0 is in state STARTED 2025-03-26 16:51:46.048075 | orchestrator | 2025-03-26 16:51:46 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:51:46.049618 | orchestrator | 2025-03-26 16:51:46 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:51:46.050551 | orchestrator | 2025-03-26 16:51:46 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:51:46.051824 | orchestrator | 2025-03-26 16:51:46 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:51:49.135200 | orchestrator | 2025-03-26 16:51:46 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:51:49.135329 | orchestrator | 2025-03-26 16:51:49 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:51:49.137213 | orchestrator | 2025-03-26 16:51:49 | INFO  | Task b1d8d300-853e-40c7-b129-5a6131ff19f0 is in state STARTED 2025-03-26 16:51:49.141985 | orchestrator | 2025-03-26 16:51:49 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:51:49.143019 | orchestrator | 2025-03-26 16:51:49 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:51:49.148691 | orchestrator | 2025-03-26 16:51:49 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:51:49.151432 | orchestrator | 2025-03-26 16:51:49 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:51:52.262869 | orchestrator | 2025-03-26 16:51:49 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:51:52.263003 | orchestrator | 2025-03-26 16:51:52 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:51:52.269165 | orchestrator | 2025-03-26 
16:51:52 | INFO  | Task b1d8d300-853e-40c7-b129-5a6131ff19f0 is in state STARTED 2025-03-26 16:51:52.269202 | orchestrator | 2025-03-26 16:51:52 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:51:52.270118 | orchestrator | 2025-03-26 16:51:52 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:51:52.270149 | orchestrator | 2025-03-26 16:51:52 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:51:52.271878 | orchestrator | 2025-03-26 16:51:52 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:51:55.343412 | orchestrator | 2025-03-26 16:51:52 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:51:55.343614 | orchestrator | 2025-03-26 16:51:55 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:51:55.346963 | orchestrator | 2025-03-26 16:51:55 | INFO  | Task b1d8d300-853e-40c7-b129-5a6131ff19f0 is in state STARTED 2025-03-26 16:51:55.347002 | orchestrator | 2025-03-26 16:51:55 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:51:55.350450 | orchestrator | 2025-03-26 16:51:55 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:51:55.350539 | orchestrator | 2025-03-26 16:51:55 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:51:55.351828 | orchestrator | 2025-03-26 16:51:55 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:51:58.444554 | orchestrator | 2025-03-26 16:51:55 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:51:58.444691 | orchestrator | 2025-03-26 16:51:58 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:51:58.448425 | orchestrator | 2025-03-26 16:51:58 | INFO  | Task b1d8d300-853e-40c7-b129-5a6131ff19f0 is in state STARTED 2025-03-26 16:51:58.448464 | orchestrator | 2025-03-26 16:51:58 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:51:58.449010 | orchestrator | 2025-03-26 16:51:58 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:51:58.450147 | orchestrator | 2025-03-26 16:51:58 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:51:58.452153 | orchestrator | 2025-03-26 16:51:58 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:01.525666 | orchestrator | 2025-03-26 16:51:58 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:01.525817 | orchestrator | 2025-03-26 16:52:01 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:01.532337 | orchestrator | 2025-03-26 16:52:01 | INFO  | Task b1d8d300-853e-40c7-b129-5a6131ff19f0 is in state STARTED 2025-03-26 16:52:01.539211 | orchestrator | 2025-03-26 16:52:01 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:01.543964 | orchestrator | 2025-03-26 16:52:01 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:52:01.548082 | orchestrator | 2025-03-26 16:52:01 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:04.613739 | orchestrator | 2025-03-26 16:52:01 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:04.613861 | orchestrator | 2025-03-26 16:52:01 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:04.613897 | 
orchestrator | 2025-03-26 16:52:04 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:04.614765 | orchestrator | 2025-03-26 16:52:04 | INFO  | Task b1d8d300-853e-40c7-b129-5a6131ff19f0 is in state STARTED 2025-03-26 16:52:04.614824 | orchestrator | 2025-03-26 16:52:04 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:04.615726 | orchestrator | 2025-03-26 16:52:04 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:52:04.616208 | orchestrator | 2025-03-26 16:52:04 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:04.618107 | orchestrator | 2025-03-26 16:52:04 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:07.695390 | orchestrator | 2025-03-26 16:52:04 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:07.695575 | orchestrator | 2025-03-26 16:52:07 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:07.696394 | orchestrator | 2025-03-26 16:52:07 | INFO  | Task b1d8d300-853e-40c7-b129-5a6131ff19f0 is in state SUCCESS 2025-03-26 16:52:07.696437 | orchestrator | 2025-03-26 16:52:07 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:07.696463 | orchestrator | 2025-03-26 16:52:07.696479 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-03-26 16:52:07.696494 | orchestrator | 2025-03-26 16:52:07.696508 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-03-26 16:52:07.696552 | orchestrator | Wednesday 26 March 2025 16:51:46 +0000 (0:00:00.435) 0:00:00.435 ******* 2025-03-26 16:52:07.696567 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:52:07.696583 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:52:07.696598 | orchestrator | changed: [testbed-manager] 2025-03-26 16:52:07.696612 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:52:07.696627 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:52:07.696641 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:52:07.696655 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:52:07.696669 | orchestrator | 2025-03-26 16:52:07.696683 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-03-26 16:52:07.696698 | orchestrator | Wednesday 26 March 2025 16:51:50 +0000 (0:00:04.681) 0:00:05.116 ******* 2025-03-26 16:52:07.696713 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-03-26 16:52:07.696727 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-03-26 16:52:07.696749 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-03-26 16:52:07.696763 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-03-26 16:52:07.696777 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-03-26 16:52:07.696791 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-03-26 16:52:07.696805 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-03-26 16:52:07.696820 | orchestrator | 2025-03-26 16:52:07.696834 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-03-26 16:52:07.696848 | orchestrator | Wednesday 26 March 2025 16:51:54 +0000 (0:00:03.833) 0:00:08.950 ******* 2025-03-26 16:52:07.696866 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-26 16:51:52.213541', 'end': '2025-03-26 16:51:52.222971', 'delta': '0:00:00.009430', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-26 16:52:07.696890 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-26 16:51:52.129020', 'end': '2025-03-26 16:51:52.137622', 'delta': '0:00:00.008602', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-26 16:52:07.696926 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-26 16:51:52.980368', 'end': '2025-03-26 16:51:52.986986', 'delta': '0:00:00.006618', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-26 16:52:07.696975 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-26 16:51:52.182546', 'end': '2025-03-26 16:51:52.193375', 'delta': '0:00:00.010829', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
2025-03-26 16:52:07.696991 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-26 16:51:53.403610', 'end': '2025-03-26 16:51:53.410047', 'delta': '0:00:00.006437', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-26 16:52:07.697008 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-26 16:51:53.639746', 'end': '2025-03-26 16:51:53.648548', 'delta': '0:00:00.008802', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-26 16:52:07.697029 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-03-26 16:51:54.219546', 'end': '2025-03-26 16:51:54.232052', 'delta': '0:00:00.012506', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-03-26 16:52:07.697052 | orchestrator | 2025-03-26 16:52:07.697068 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-03-26 16:52:07.697084 | orchestrator | Wednesday 26 March 2025 16:51:58 +0000 (0:00:04.264) 0:00:13.214 ******* 2025-03-26 16:52:07.697100 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-03-26 16:52:07.697115 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-03-26 16:52:07.697130 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-03-26 16:52:07.697146 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-03-26 16:52:07.697161 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-03-26 16:52:07.697176 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-03-26 16:52:07.697192 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-03-26 16:52:07.697207 | orchestrator | 2025-03-26 16:52:07.697223 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 16:52:07.697239 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:52:07.697256 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:52:07.697271 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:52:07.697293 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:52:07.697332 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:52:07.697350 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:52:07.697366 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:52:07.697381 | orchestrator | 2025-03-26 16:52:07.697396 | orchestrator | Wednesday 26 March 2025 16:52:04 +0000 (0:00:05.576) 0:00:18.791 ******* 2025-03-26 16:52:07.697411 | orchestrator | =============================================================================== 2025-03-26 16:52:07.697426 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 5.58s 2025-03-26 16:52:07.697442 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.68s 2025-03-26 16:52:07.697457 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 4.26s 2025-03-26 16:52:07.697472 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 3.83s 2025-03-26 16:52:07.697492 | orchestrator | 2025-03-26 16:52:07 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:07.700634 | orchestrator | 2025-03-26 16:52:07 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:52:07.702269 | orchestrator | 2025-03-26 16:52:07 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:07.702309 | orchestrator | 2025-03-26 16:52:07 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:07.704001 | orchestrator | 2025-03-26 16:52:07 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:10.775841 | orchestrator | 2025-03-26 16:52:10 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:10.777689 | orchestrator | 2025-03-26 16:52:10 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:10.782956 | orchestrator | 2025-03-26 16:52:10 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:10.787886 | orchestrator | 2025-03-26 16:52:10 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:52:10.793236 | orchestrator | 2025-03-26 16:52:10 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:10.801999 | orchestrator | 2025-03-26 16:52:10 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:13.921328 | orchestrator | 2025-03-26 16:52:10 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:13.921467 | orchestrator | 2025-03-26 16:52:13 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:13.921788 | orchestrator | 2025-03-26 16:52:13 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:13.925133 | orchestrator | 2025-03-26 16:52:13 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:13.931182 | orchestrator | 2025-03-26 16:52:13 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:52:13.932290 | orchestrator | 2025-03-26 16:52:13 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:13.939170 | orchestrator | 2025-03-26 16:52:13 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:17.046580 | orchestrator | 2025-03-26 16:52:13 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:17.046718 | orchestrator | 2025-03-26 16:52:17 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:17.055463 | orchestrator | 2025-03-26 16:52:17 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:17.060933 | orchestrator | 2025-03-26 16:52:17 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:17.067520 | orchestrator | 2025-03-26 16:52:17 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:52:17.067577 | orchestrator | 2025-03-26 16:52:17 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:17.077891 | orchestrator | 2025-03-26 16:52:17 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:20.152678 | orchestrator | 2025-03-26 16:52:17 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:20.152817 | orchestrator | 2025-03-26 16:52:20 | INFO  | Task 
e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:20.153083 | orchestrator | 2025-03-26 16:52:20 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:20.153109 | orchestrator | 2025-03-26 16:52:20 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:20.153130 | orchestrator | 2025-03-26 16:52:20 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:52:20.158264 | orchestrator | 2025-03-26 16:52:20 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:20.158572 | orchestrator | 2025-03-26 16:52:20 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:23.261910 | orchestrator | 2025-03-26 16:52:20 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:23.262106 | orchestrator | 2025-03-26 16:52:23 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:23.262657 | orchestrator | 2025-03-26 16:52:23 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:23.265254 | orchestrator | 2025-03-26 16:52:23 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:23.266236 | orchestrator | 2025-03-26 16:52:23 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:52:23.266264 | orchestrator | 2025-03-26 16:52:23 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:23.266285 | orchestrator | 2025-03-26 16:52:23 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:26.359378 | orchestrator | 2025-03-26 16:52:23 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:26.359518 | orchestrator | 2025-03-26 16:52:26 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:26.362690 | orchestrator | 2025-03-26 16:52:26 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:26.362729 | orchestrator | 2025-03-26 16:52:26 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:26.367142 | orchestrator | 2025-03-26 16:52:26 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:52:26.373095 | orchestrator | 2025-03-26 16:52:26 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:26.373983 | orchestrator | 2025-03-26 16:52:26 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:26.374112 | orchestrator | 2025-03-26 16:52:26 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:29.456318 | orchestrator | 2025-03-26 16:52:29 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:29.456861 | orchestrator | 2025-03-26 16:52:29 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:29.456886 | orchestrator | 2025-03-26 16:52:29 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:29.456897 | orchestrator | 2025-03-26 16:52:29 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state STARTED 2025-03-26 16:52:29.456913 | orchestrator | 2025-03-26 16:52:29 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:29.460434 | orchestrator | 2025-03-26 16:52:29 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:32.534783 | 
orchestrator | 2025-03-26 16:52:29 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:32.534922 | orchestrator | 2025-03-26 16:52:32 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:32.539836 | orchestrator | 2025-03-26 16:52:32 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:32.548305 | orchestrator | 2025-03-26 16:52:32 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:32.555110 | orchestrator | 2025-03-26 16:52:32 | INFO  | Task 57b7d322-6b44-4da2-9572-b6640d320ab3 is in state SUCCESS 2025-03-26 16:52:32.563770 | orchestrator | 2025-03-26 16:52:32 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:52:32.582434 | orchestrator | 2025-03-26 16:52:32 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:35.685335 | orchestrator | 2025-03-26 16:52:32 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:35.685445 | orchestrator | 2025-03-26 16:52:32 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:35.685498 | orchestrator | 2025-03-26 16:52:35 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:35.687778 | orchestrator | 2025-03-26 16:52:35 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:35.688738 | orchestrator | 2025-03-26 16:52:35 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:35.688760 | orchestrator | 2025-03-26 16:52:35 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:52:35.690370 | orchestrator | 2025-03-26 16:52:35 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:35.692047 | orchestrator | 2025-03-26 16:52:35 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:35.692100 | orchestrator | 2025-03-26 16:52:35 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:38.821998 | orchestrator | 2025-03-26 16:52:38 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:38.822722 | orchestrator | 2025-03-26 16:52:38 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:38.827512 | orchestrator | 2025-03-26 16:52:38 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:38.830377 | orchestrator | 2025-03-26 16:52:38 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:52:38.835682 | orchestrator | 2025-03-26 16:52:38 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:38.840154 | orchestrator | 2025-03-26 16:52:38 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:41.936712 | orchestrator | 2025-03-26 16:52:38 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:41.936878 | orchestrator | 2025-03-26 16:52:41 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:41.940686 | orchestrator | 2025-03-26 16:52:41 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:41.942529 | orchestrator | 2025-03-26 16:52:41 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:41.950458 | orchestrator | 2025-03-26 16:52:41 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 
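Note: the interleaved "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" messages are the osism wrapper polling the background tasks of the nutshell collection until each one finishes. A rough sketch of that polling pattern, where `get_state` stands in for the real task-queue lookup (an assumption; STARTED/SUCCESS match the states printed in the log):

```python
import time
from typing import Callable

TERMINAL_STATES = {"SUCCESS", "FAILURE"}


def wait_for_tasks(
    task_ids: list[str],
    get_state: Callable[[str], str],
    interval: float = 1.0,
) -> dict[str, str]:
    """Poll every task until it reaches a terminal state, echoing progress like the log above."""
    pending = set(task_ids)
    results: dict[str, str] = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)  # hypothetical lookup against the task queue
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
        pending.difference_update(results)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```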
2025-03-26 16:52:41.951602 | orchestrator | 2025-03-26 16:52:41 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:41.954831 | orchestrator | 2025-03-26 16:52:41 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:45.041958 | orchestrator | 2025-03-26 16:52:41 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:45.042153 | orchestrator | 2025-03-26 16:52:45 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:45.047753 | orchestrator | 2025-03-26 16:52:45 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state STARTED 2025-03-26 16:52:45.051668 | orchestrator | 2025-03-26 16:52:45 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:45.057054 | orchestrator | 2025-03-26 16:52:45 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:52:45.064062 | orchestrator | 2025-03-26 16:52:45 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:45.067051 | orchestrator | 2025-03-26 16:52:45 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:48.128403 | orchestrator | 2025-03-26 16:52:45 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:48.128633 | orchestrator | 2025-03-26 16:52:48 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:48.135360 | orchestrator | 2025-03-26 16:52:48 | INFO  | Task a37a6f1b-87ca-46f2-b809-042de022e2f6 is in state SUCCESS 2025-03-26 16:52:48.135411 | orchestrator | 2025-03-26 16:52:48 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:48.135427 | orchestrator | 2025-03-26 16:52:48 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:52:48.135452 | orchestrator | 2025-03-26 16:52:48 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:51.215208 | orchestrator | 2025-03-26 16:52:48 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:51.215305 | orchestrator | 2025-03-26 16:52:48 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:51.215332 | orchestrator | 2025-03-26 16:52:51 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:51.218180 | orchestrator | 2025-03-26 16:52:51 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:51.223561 | orchestrator | 2025-03-26 16:52:51 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:52:51.229692 | orchestrator | 2025-03-26 16:52:51 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:51.232766 | orchestrator | 2025-03-26 16:52:51 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:54.325215 | orchestrator | 2025-03-26 16:52:51 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:54.325355 | orchestrator | 2025-03-26 16:52:54 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:54.327335 | orchestrator | 2025-03-26 16:52:54 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:54.331488 | orchestrator | 2025-03-26 16:52:54 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:52:54.332356 | orchestrator | 2025-03-26 16:52:54 | INFO  | Task 
28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:54.338167 | orchestrator | 2025-03-26 16:52:54 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:57.513398 | orchestrator | 2025-03-26 16:52:54 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:52:57.513548 | orchestrator | 2025-03-26 16:52:57 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:52:57.515767 | orchestrator | 2025-03-26 16:52:57 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:52:57.517280 | orchestrator | 2025-03-26 16:52:57 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:52:57.517956 | orchestrator | 2025-03-26 16:52:57 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:52:57.517989 | orchestrator | 2025-03-26 16:52:57 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:52:57.521496 | orchestrator | 2025-03-26 16:52:57 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:00.610590 | orchestrator | 2025-03-26 16:53:00 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:53:00.611540 | orchestrator | 2025-03-26 16:53:00 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:53:00.613283 | orchestrator | 2025-03-26 16:53:00 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:00.614549 | orchestrator | 2025-03-26 16:53:00 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:00.617542 | orchestrator | 2025-03-26 16:53:00 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:03.697803 | orchestrator | 2025-03-26 16:53:00 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:03.697943 | orchestrator | 2025-03-26 16:53:03 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:53:03.698446 | orchestrator | 2025-03-26 16:53:03 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:53:03.698491 | orchestrator | 2025-03-26 16:53:03 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:03.701560 | orchestrator | 2025-03-26 16:53:03 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:03.702503 | orchestrator | 2025-03-26 16:53:03 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:06.769513 | orchestrator | 2025-03-26 16:53:03 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:06.769665 | orchestrator | 2025-03-26 16:53:06 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:53:06.777203 | orchestrator | 2025-03-26 16:53:06 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:53:06.777230 | orchestrator | 2025-03-26 16:53:06 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:06.777241 | orchestrator | 2025-03-26 16:53:06 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:06.777256 | orchestrator | 2025-03-26 16:53:06 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:09.855083 | orchestrator | 2025-03-26 16:53:06 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:09.855192 | orchestrator | 2025-03-26 16:53:09 | INFO  | Task 
e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:53:09.861910 | orchestrator | 2025-03-26 16:53:09 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:53:09.869300 | orchestrator | 2025-03-26 16:53:09 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:09.875966 | orchestrator | 2025-03-26 16:53:09 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:09.876008 | orchestrator | 2025-03-26 16:53:09 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:12.952023 | orchestrator | 2025-03-26 16:53:09 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:12.952161 | orchestrator | 2025-03-26 16:53:12 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:53:12.952378 | orchestrator | 2025-03-26 16:53:12 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:53:12.955945 | orchestrator | 2025-03-26 16:53:12 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:12.956420 | orchestrator | 2025-03-26 16:53:12 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:12.957296 | orchestrator | 2025-03-26 16:53:12 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:16.044427 | orchestrator | 2025-03-26 16:53:12 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:16.044569 | orchestrator | 2025-03-26 16:53:16 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:53:16.045386 | orchestrator | 2025-03-26 16:53:16 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state STARTED 2025-03-26 16:53:16.051374 | orchestrator | 2025-03-26 16:53:16 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:16.053026 | orchestrator | 2025-03-26 16:53:16 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:16.059391 | orchestrator | 2025-03-26 16:53:16 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:19.152889 | orchestrator | 2025-03-26 16:53:16 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:19.152976 | orchestrator | 2025-03-26 16:53:19 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:53:19.153525 | orchestrator | 2025-03-26 16:53:19 | INFO  | Task 93bf4cd7-b869-41fc-b389-6a4853376adb is in state SUCCESS 2025-03-26 16:53:19.154387 | orchestrator | 2025-03-26 16:53:19.154421 | orchestrator | 2025-03-26 16:53:19.154436 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-03-26 16:53:19.154450 | orchestrator | 2025-03-26 16:53:19.154464 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-03-26 16:53:19.154479 | orchestrator | Wednesday 26 March 2025 16:51:46 +0000 (0:00:00.702) 0:00:00.702 ******* 2025-03-26 16:53:19.154493 | orchestrator | ok: [testbed-manager] => { 2025-03-26 16:53:19.154509 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-03-26 16:53:19.154525 | orchestrator | } 2025-03-26 16:53:19.154539 | orchestrator | 2025-03-26 16:53:19.154552 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-03-26 16:53:19.154566 | orchestrator | Wednesday 26 March 2025 16:51:47 +0000 (0:00:00.425) 0:00:01.127 ******* 2025-03-26 16:53:19.154580 | orchestrator | ok: [testbed-manager] 2025-03-26 16:53:19.154595 | orchestrator | 2025-03-26 16:53:19.154609 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-03-26 16:53:19.154644 | orchestrator | Wednesday 26 March 2025 16:51:49 +0000 (0:00:01.752) 0:00:02.880 ******* 2025-03-26 16:53:19.154659 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-03-26 16:53:19.154673 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-03-26 16:53:19.154687 | orchestrator | 2025-03-26 16:53:19.154701 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-03-26 16:53:19.154714 | orchestrator | Wednesday 26 March 2025 16:51:50 +0000 (0:00:01.685) 0:00:04.565 ******* 2025-03-26 16:53:19.154728 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:19.154742 | orchestrator | 2025-03-26 16:53:19.154756 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-03-26 16:53:19.154770 | orchestrator | Wednesday 26 March 2025 16:51:56 +0000 (0:00:06.032) 0:00:10.598 ******* 2025-03-26 16:53:19.154784 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:19.154798 | orchestrator | 2025-03-26 16:53:19.154812 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-03-26 16:53:19.154826 | orchestrator | Wednesday 26 March 2025 16:51:58 +0000 (0:00:01.993) 0:00:12.591 ******* 2025-03-26 16:53:19.154839 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-03-26 16:53:19.154854 | orchestrator | ok: [testbed-manager] 2025-03-26 16:53:19.154869 | orchestrator | 2025-03-26 16:53:19.154882 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-03-26 16:53:19.154896 | orchestrator | Wednesday 26 March 2025 16:52:26 +0000 (0:00:27.936) 0:00:40.528 ******* 2025-03-26 16:53:19.154910 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:19.154924 | orchestrator | 2025-03-26 16:53:19.154938 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 16:53:19.154952 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:53:19.154989 | orchestrator | 2025-03-26 16:53:19.155004 | orchestrator | Wednesday 26 March 2025 16:52:29 +0000 (0:00:03.222) 0:00:43.750 ******* 2025-03-26 16:53:19.155020 | orchestrator | =============================================================================== 2025-03-26 16:53:19.155035 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.94s 2025-03-26 16:53:19.155050 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 6.03s 2025-03-26 16:53:19.155064 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.22s 2025-03-26 16:53:19.155080 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.99s 2025-03-26 16:53:19.155095 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.75s 2025-03-26 16:53:19.155125 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.69s 2025-03-26 16:53:19.155141 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.43s 2025-03-26 16:53:19.155155 | orchestrator | 2025-03-26 16:53:19.155170 | orchestrator | 2025-03-26 16:53:19.155185 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-03-26 16:53:19.155200 | orchestrator | 2025-03-26 16:53:19.155215 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-03-26 16:53:19.155230 | orchestrator | Wednesday 26 March 2025 16:51:45 +0000 (0:00:00.715) 0:00:00.715 ******* 2025-03-26 16:53:19.155245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-03-26 16:53:19.155262 | orchestrator | 2025-03-26 16:53:19.155277 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-03-26 16:53:19.155292 | orchestrator | Wednesday 26 March 2025 16:51:46 +0000 (0:00:00.854) 0:00:01.569 ******* 2025-03-26 16:53:19.155307 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-03-26 16:53:19.155322 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-03-26 16:53:19.155336 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-03-26 16:53:19.155352 | orchestrator | 2025-03-26 16:53:19.155366 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-03-26 16:53:19.155380 | orchestrator | Wednesday 26 March 2025 16:51:48 +0000 (0:00:02.195) 0:00:03.765 ******* 2025-03-26 16:53:19.155393 | orchestrator | changed: [testbed-manager] 
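Note: the homer, openstackclient and phpmyadmin plays all follow the same pattern — create the service directory, render a docker-compose.yml, bring the service up, and retry until the containers are up (the "FAILED - RETRYING: Manage ... service" lines and the long durations are that retry loop, presumably while images are still being pulled on first start). A hedged sketch of the bring-up-and-wait step, assuming the Docker CLI with Compose v2 and its `ps --services --status running` filter; paths and retry counts are illustrative only:

```python
import subprocess
import time


def compose(project_dir: str, *args: str) -> str:
    """Run `docker compose` in the given project directory and return stdout."""
    result = subprocess.run(
        ["docker", "compose", *args],
        cwd=project_dir, check=True, capture_output=True, text=True,
    )
    return result.stdout


def manage_service(project_dir: str, retries: int = 10, delay: float = 5.0) -> None:
    """Start a compose-based service (e.g. /opt/homer) and wait until it is running."""
    compose(project_dir, "up", "-d")
    for attempt in range(retries):
        # List only services whose containers are currently running.
        running = compose(project_dir, "ps", "--services", "--status", "running")
        if running.strip():
            return
        print(f"Service in {project_dir} not up yet "
              f"({retries - attempt - 1} retries left)")
        time.sleep(delay)
    raise RuntimeError(f"Service in {project_dir} did not come up after {retries} attempts")
```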
2025-03-26 16:53:19.155408 | orchestrator | 2025-03-26 16:53:19.155421 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-03-26 16:53:19.155435 | orchestrator | Wednesday 26 March 2025 16:51:50 +0000 (0:00:02.513) 0:00:06.279 ******* 2025-03-26 16:53:19.155449 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-03-26 16:53:19.155463 | orchestrator | ok: [testbed-manager] 2025-03-26 16:53:19.155477 | orchestrator | 2025-03-26 16:53:19.155501 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-03-26 16:53:19.159142 | orchestrator | Wednesday 26 March 2025 16:52:33 +0000 (0:00:42.298) 0:00:48.577 ******* 2025-03-26 16:53:19.159174 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:19.159189 | orchestrator | 2025-03-26 16:53:19.159204 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-03-26 16:53:19.159218 | orchestrator | Wednesday 26 March 2025 16:52:35 +0000 (0:00:02.083) 0:00:50.661 ******* 2025-03-26 16:53:19.159233 | orchestrator | ok: [testbed-manager] 2025-03-26 16:53:19.159248 | orchestrator | 2025-03-26 16:53:19.159263 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-03-26 16:53:19.159277 | orchestrator | Wednesday 26 March 2025 16:52:36 +0000 (0:00:01.377) 0:00:52.038 ******* 2025-03-26 16:53:19.159292 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:19.159307 | orchestrator | 2025-03-26 16:53:19.159322 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-03-26 16:53:19.159350 | orchestrator | Wednesday 26 March 2025 16:52:40 +0000 (0:00:04.038) 0:00:56.077 ******* 2025-03-26 16:53:19.159365 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:19.159380 | orchestrator | 2025-03-26 16:53:19.159394 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-03-26 16:53:19.159409 | orchestrator | Wednesday 26 March 2025 16:52:42 +0000 (0:00:02.211) 0:00:58.288 ******* 2025-03-26 16:53:19.159423 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:19.159438 | orchestrator | 2025-03-26 16:53:19.159453 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-03-26 16:53:19.159467 | orchestrator | Wednesday 26 March 2025 16:52:44 +0000 (0:00:01.507) 0:00:59.795 ******* 2025-03-26 16:53:19.159482 | orchestrator | ok: [testbed-manager] 2025-03-26 16:53:19.159497 | orchestrator | 2025-03-26 16:53:19.159511 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 16:53:19.159526 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:53:19.159541 | orchestrator | 2025-03-26 16:53:19.159556 | orchestrator | Wednesday 26 March 2025 16:52:44 +0000 (0:00:00.564) 0:01:00.360 ******* 2025-03-26 16:53:19.159570 | orchestrator | =============================================================================== 2025-03-26 16:53:19.159585 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 42.30s 2025-03-26 16:53:19.159600 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 4.04s 2025-03-26 16:53:19.159614 | orchestrator | osism.services.openstackclient : Copy 
docker-compose.yml file ----------- 2.51s 2025-03-26 16:53:19.159650 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.21s 2025-03-26 16:53:19.159665 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.20s 2025-03-26 16:53:19.159687 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.08s 2025-03-26 16:53:19.159701 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.51s 2025-03-26 16:53:19.159715 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.38s 2025-03-26 16:53:19.159729 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.85s 2025-03-26 16:53:19.159743 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.57s 2025-03-26 16:53:19.159757 | orchestrator | 2025-03-26 16:53:19.159778 | orchestrator | 2025-03-26 16:53:19 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:19.162096 | orchestrator | 2025-03-26 16:53:19 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:19.162135 | orchestrator | 2025-03-26 16:53:19 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:22.254739 | orchestrator | 2025-03-26 16:53:19 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:22.254872 | orchestrator | 2025-03-26 16:53:22 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:53:22.274573 | orchestrator | 2025-03-26 16:53:22 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:22.274706 | orchestrator | 2025-03-26 16:53:22 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:25.349872 | orchestrator | 2025-03-26 16:53:22 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:25.350142 | orchestrator | 2025-03-26 16:53:22 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:25.350186 | orchestrator | 2025-03-26 16:53:25 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state STARTED 2025-03-26 16:53:25.350276 | orchestrator | 2025-03-26 16:53:25 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:25.350331 | orchestrator | 2025-03-26 16:53:25 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:25.352126 | orchestrator | 2025-03-26 16:53:25 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:28.417487 | orchestrator | 2025-03-26 16:53:25 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:28.417697 | orchestrator | 2025-03-26 16:53:28 | INFO  | Task e3358b67-27a4-49c9-bd23-344f21a8e410 is in state SUCCESS 2025-03-26 16:53:28.417950 | orchestrator | 2025-03-26 16:53:28.417973 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-03-26 16:53:28.417988 | orchestrator | 2025-03-26 16:53:28.418003 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-03-26 16:53:28.418075 | orchestrator | Wednesday 26 March 2025 16:52:15 +0000 (0:00:00.691) 0:00:00.691 ******* 2025-03-26 16:53:28.418092 | orchestrator | ok: [testbed-manager] 2025-03-26 16:53:28.418108 | orchestrator | 2025-03-26 16:53:28.418122 | orchestrator | TASK 
[osism.services.phpmyadmin : Create required directories] ***************** 2025-03-26 16:53:28.418136 | orchestrator | Wednesday 26 March 2025 16:52:17 +0000 (0:00:02.334) 0:00:03.026 ******* 2025-03-26 16:53:28.418151 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-03-26 16:53:28.418165 | orchestrator | 2025-03-26 16:53:28.418179 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-03-26 16:53:28.418193 | orchestrator | Wednesday 26 March 2025 16:52:18 +0000 (0:00:00.942) 0:00:03.969 ******* 2025-03-26 16:53:28.418207 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:28.418221 | orchestrator | 2025-03-26 16:53:28.418236 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-03-26 16:53:28.418249 | orchestrator | Wednesday 26 March 2025 16:52:21 +0000 (0:00:02.092) 0:00:06.062 ******* 2025-03-26 16:53:28.418263 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-03-26 16:53:28.418285 | orchestrator | ok: [testbed-manager] 2025-03-26 16:53:28.418300 | orchestrator | 2025-03-26 16:53:28.418314 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-03-26 16:53:28.418328 | orchestrator | Wednesday 26 March 2025 16:53:10 +0000 (0:00:49.777) 0:00:55.839 ******* 2025-03-26 16:53:28.418342 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:28.418356 | orchestrator | 2025-03-26 16:53:28.418370 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 16:53:28.418385 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:53:28.418400 | orchestrator | 2025-03-26 16:53:28.418415 | orchestrator | Wednesday 26 March 2025 16:53:14 +0000 (0:00:04.140) 0:00:59.980 ******* 2025-03-26 16:53:28.418429 | orchestrator | =============================================================================== 2025-03-26 16:53:28.418443 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 49.78s 2025-03-26 16:53:28.418457 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.14s 2025-03-26 16:53:28.418471 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.34s 2025-03-26 16:53:28.418485 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.09s 2025-03-26 16:53:28.418499 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.94s 2025-03-26 16:53:28.418513 | orchestrator | 2025-03-26 16:53:28.418527 | orchestrator | 2025-03-26 16:53:28.418541 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-26 16:53:28.418555 | orchestrator | 2025-03-26 16:53:28.418569 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-26 16:53:28.418584 | orchestrator | Wednesday 26 March 2025 16:51:45 +0000 (0:00:00.710) 0:00:00.710 ******* 2025-03-26 16:53:28.418598 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-03-26 16:53:28.418718 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-03-26 16:53:28.418778 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-03-26 16:53:28.418795 | orchestrator | changed: 
[testbed-node-2] => (item=enable_netdata_True) 2025-03-26 16:53:28.418809 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-03-26 16:53:28.418823 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-03-26 16:53:28.418837 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-03-26 16:53:28.418850 | orchestrator | 2025-03-26 16:53:28.418865 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-03-26 16:53:28.418879 | orchestrator | 2025-03-26 16:53:28.418893 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-03-26 16:53:28.418907 | orchestrator | Wednesday 26 March 2025 16:51:48 +0000 (0:00:03.208) 0:00:03.918 ******* 2025-03-26 16:53:28.418936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-26 16:53:28.418953 | orchestrator | 2025-03-26 16:53:28.418972 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-03-26 16:53:28.418987 | orchestrator | Wednesday 26 March 2025 16:51:53 +0000 (0:00:04.440) 0:00:08.359 ******* 2025-03-26 16:53:28.419000 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:53:28.419015 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:53:28.419029 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:53:28.419043 | orchestrator | ok: [testbed-manager] 2025-03-26 16:53:28.419057 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:53:28.419071 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:53:28.419085 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:53:28.419099 | orchestrator | 2025-03-26 16:53:28.419113 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-03-26 16:53:28.419128 | orchestrator | Wednesday 26 March 2025 16:51:57 +0000 (0:00:04.181) 0:00:12.541 ******* 2025-03-26 16:53:28.419141 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:53:28.419155 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:53:28.419169 | orchestrator | ok: [testbed-manager] 2025-03-26 16:53:28.419183 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:53:28.419202 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:53:28.419216 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:53:28.419230 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:53:28.419244 | orchestrator | 2025-03-26 16:53:28.419258 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-03-26 16:53:28.419285 | orchestrator | Wednesday 26 March 2025 16:52:02 +0000 (0:00:05.282) 0:00:17.823 ******* 2025-03-26 16:53:28.419300 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:53:28.419314 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:53:28.419328 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:28.419342 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:53:28.419356 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:53:28.419369 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:53:28.419383 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:53:28.419397 | orchestrator | 2025-03-26 16:53:28.419411 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-03-26 16:53:28.419425 | 
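The "Group hosts based on enabled services" task above puts every host into a dynamic group derived from a feature flag (here enable_netdata_True), and the following "Apply role netdata" play targets exactly that group. A minimal sketch of the pattern is shown below; the group name matches the log, but the surrounding playbook is illustrative rather than the actual source, and the real task loops over several enable_* flags instead of just one.

    # Illustrative only: dynamic grouping by feature flag, mirroring the
    # enable_netdata_True group visible in the log above.
    - name: Group hosts based on configuration
      hosts: all
      gather_facts: false
      tasks:
        - name: Group hosts based on enabled services
          ansible.builtin.group_by:
            key: "enable_netdata_{{ enable_netdata | default(true) }}"

    - name: Apply role netdata
      hosts: enable_netdata_True
      become: true
      roles:
        - role: osism.services.netdata

Because group_by evaluates per host, hosts with the flag disabled simply land in a different group (enable_netdata_False) and are skipped by the netdata play.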
orchestrator | Wednesday 26 March 2025 16:52:06 +0000 (0:00:03.735) 0:00:21.558 ******* 2025-03-26 16:53:28.419439 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:53:28.419453 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:53:28.419466 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:53:28.419480 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:53:28.419494 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:53:28.419508 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:53:28.419521 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:28.419535 | orchestrator | 2025-03-26 16:53:28.419549 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-03-26 16:53:28.419573 | orchestrator | Wednesday 26 March 2025 16:52:20 +0000 (0:00:13.848) 0:00:35.407 ******* 2025-03-26 16:53:28.419587 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:53:28.419601 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:53:28.419615 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:53:28.419629 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:53:28.419665 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:53:28.419679 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:53:28.419693 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:28.419707 | orchestrator | 2025-03-26 16:53:28.419721 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-03-26 16:53:28.419735 | orchestrator | Wednesday 26 March 2025 16:52:41 +0000 (0:00:20.861) 0:00:56.269 ******* 2025-03-26 16:53:28.419794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-26 16:53:28.419851 | orchestrator | 2025-03-26 16:53:28.419867 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-03-26 16:53:28.419881 | orchestrator | Wednesday 26 March 2025 16:52:47 +0000 (0:00:05.960) 0:01:02.229 ******* 2025-03-26 16:53:28.419895 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-03-26 16:53:28.419909 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-03-26 16:53:28.419924 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-03-26 16:53:28.419938 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-03-26 16:53:28.419952 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-03-26 16:53:28.419966 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-03-26 16:53:28.419980 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-03-26 16:53:28.419994 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-03-26 16:53:28.420008 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-03-26 16:53:28.420021 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-03-26 16:53:28.420035 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-03-26 16:53:28.420049 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-03-26 16:53:28.420063 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-03-26 16:53:28.420077 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-03-26 16:53:28.420090 | orchestrator | 2025-03-26 
16:53:28.420105 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-03-26 16:53:28.420120 | orchestrator | Wednesday 26 March 2025 16:53:01 +0000 (0:00:13.910) 0:01:16.140 ******* 2025-03-26 16:53:28.420134 | orchestrator | ok: [testbed-manager] 2025-03-26 16:53:28.420149 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:53:28.420163 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:53:28.420177 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:53:28.420191 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:53:28.420205 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:53:28.420219 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:53:28.420233 | orchestrator | 2025-03-26 16:53:28.420248 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-03-26 16:53:28.420262 | orchestrator | Wednesday 26 March 2025 16:53:04 +0000 (0:00:03.325) 0:01:19.466 ******* 2025-03-26 16:53:28.420276 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:53:28.420290 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:28.420304 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:53:28.420318 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:53:28.420332 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:53:28.420346 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:53:28.420360 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:53:28.420374 | orchestrator | 2025-03-26 16:53:28.420397 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-03-26 16:53:28.420411 | orchestrator | Wednesday 26 March 2025 16:53:07 +0000 (0:00:03.505) 0:01:22.972 ******* 2025-03-26 16:53:28.420425 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:53:28.420439 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:53:28.420453 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:53:28.420467 | orchestrator | ok: [testbed-manager] 2025-03-26 16:53:28.420480 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:53:28.420494 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:53:28.420508 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:53:28.420522 | orchestrator | 2025-03-26 16:53:28.420536 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-03-26 16:53:28.420550 | orchestrator | Wednesday 26 March 2025 16:53:11 +0000 (0:00:03.484) 0:01:26.456 ******* 2025-03-26 16:53:28.420564 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:53:28.420578 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:53:28.420592 | orchestrator | ok: [testbed-manager] 2025-03-26 16:53:28.420606 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:53:28.420620 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:53:28.420694 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:53:28.420804 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:53:28.420823 | orchestrator | 2025-03-26 16:53:28.420838 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-03-26 16:53:28.420853 | orchestrator | Wednesday 26 March 2025 16:53:15 +0000 (0:00:04.434) 0:01:30.891 ******* 2025-03-26 16:53:28.420867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-03-26 16:53:28.420884 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-26 16:53:28.420899 | orchestrator | 2025-03-26 16:53:28.420913 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-03-26 16:53:28.420927 | orchestrator | Wednesday 26 March 2025 16:53:19 +0000 (0:00:03.506) 0:01:34.398 ******* 2025-03-26 16:53:28.420987 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:28.421039 | orchestrator | 2025-03-26 16:53:28.421055 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-03-26 16:53:28.421070 | orchestrator | Wednesday 26 March 2025 16:53:23 +0000 (0:00:04.403) 0:01:38.801 ******* 2025-03-26 16:53:28.421085 | orchestrator | changed: [testbed-manager] 2025-03-26 16:53:28.421099 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:53:28.421113 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:53:28.421127 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:53:28.421141 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:53:28.421155 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:53:28.421169 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:53:28.421183 | orchestrator | 2025-03-26 16:53:28.421197 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 16:53:28.421274 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:53:28.421291 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:53:28.421314 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:53:28.421328 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:53:28.421342 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:53:28.421357 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:53:28.421380 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:53:28.421394 | orchestrator | 2025-03-26 16:53:28.421409 | orchestrator | Wednesday 26 March 2025 16:53:27 +0000 (0:00:03.526) 0:01:42.327 ******* 2025-03-26 16:53:28.421423 | orchestrator | =============================================================================== 2025-03-26 16:53:28.421437 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 20.86s 2025-03-26 16:53:28.421453 | orchestrator | osism.services.netdata : Copy configuration files ---------------------- 13.91s 2025-03-26 16:53:28.421479 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.85s 2025-03-26 16:53:28.421495 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 5.96s 2025-03-26 16:53:28.421510 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 5.28s 2025-03-26 16:53:28.421526 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 4.44s 2025-03-26 16:53:28.421541 | orchestrator | osism.services.netdata : Include distribution specific 
install tasks ---- 4.44s 2025-03-26 16:53:28.421556 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 4.40s 2025-03-26 16:53:28.421571 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 4.18s 2025-03-26 16:53:28.421586 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.74s 2025-03-26 16:53:28.421611 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.53s 2025-03-26 16:53:28.421627 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 3.51s 2025-03-26 16:53:28.421701 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 3.50s 2025-03-26 16:53:28.421717 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 3.48s 2025-03-26 16:53:28.421733 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 3.33s 2025-03-26 16:53:28.421749 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.21s 2025-03-26 16:53:28.421776 | orchestrator | 2025-03-26 16:53:28 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:28.421870 | orchestrator | 2025-03-26 16:53:28 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:28.421891 | orchestrator | 2025-03-26 16:53:28 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:31.507060 | orchestrator | 2025-03-26 16:53:28 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:31.507210 | orchestrator | 2025-03-26 16:53:31 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:31.515836 | orchestrator | 2025-03-26 16:53:31 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:31.520953 | orchestrator | 2025-03-26 16:53:31 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:34.569961 | orchestrator | 2025-03-26 16:53:31 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:34.570835 | orchestrator | 2025-03-26 16:53:34 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:34.571368 | orchestrator | 2025-03-26 16:53:34 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:34.571443 | orchestrator | 2025-03-26 16:53:34 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:34.571518 | orchestrator | 2025-03-26 16:53:34 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:37.673816 | orchestrator | 2025-03-26 16:53:37 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:37.683138 | orchestrator | 2025-03-26 16:53:37 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:37.690492 | orchestrator | 2025-03-26 16:53:37 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:40.756301 | orchestrator | 2025-03-26 16:53:37 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:40.756432 | orchestrator | 2025-03-26 16:53:40 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:40.762550 | orchestrator | 2025-03-26 16:53:40 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:40.766320 | orchestrator 
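After installing netdata everywhere, the play includes server.yml only on testbed-manager and client.yml on the six testbed nodes, i.e. one streaming parent with six children, and then raises vm.max_map_count on the parent before the restart handler fires. The sketch below illustrates that parent-only step; the task and handler names mirror the log, while the sysctl value is an assumption and not necessarily the role's default.

    # Illustrative sketch of the parent-only part of the netdata deployment.
    # The vm.max_map_count value is assumed; the role's actual default may differ.
    - name: Apply netdata server-specific settings
      hosts: testbed-manager
      become: true
      tasks:
        - name: Set sysctl vm.max_map_count parameter
          ansible.posix.sysctl:
            name: vm.max_map_count
            value: "262144"
            state: present
            sysctl_set: true
          notify: Restart service netdata
      handlers:
        - name: Restart service netdata
          ansible.builtin.service:
            name: netdata
            state: restarted

Raising the map count on the parent makes sense because it aggregates the metrics streamed from all children in its database engine, which can memory-map a large number of files.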
| 2025-03-26 16:53:40 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:43.820914 | orchestrator | 2025-03-26 16:53:40 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:43.821031 | orchestrator | 2025-03-26 16:53:43 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:43.822274 | orchestrator | 2025-03-26 16:53:43 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:43.823220 | orchestrator | 2025-03-26 16:53:43 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:43.826770 | orchestrator | 2025-03-26 16:53:43 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:46.880644 | orchestrator | 2025-03-26 16:53:46 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:46.880897 | orchestrator | 2025-03-26 16:53:46 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:46.885287 | orchestrator | 2025-03-26 16:53:46 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:49.954128 | orchestrator | 2025-03-26 16:53:46 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:49.954266 | orchestrator | 2025-03-26 16:53:49 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:49.955566 | orchestrator | 2025-03-26 16:53:49 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:49.955603 | orchestrator | 2025-03-26 16:53:49 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:53.025019 | orchestrator | 2025-03-26 16:53:49 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:53.025166 | orchestrator | 2025-03-26 16:53:53 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:53.026285 | orchestrator | 2025-03-26 16:53:53 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:53.032180 | orchestrator | 2025-03-26 16:53:53 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:53.032250 | orchestrator | 2025-03-26 16:53:53 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:56.110009 | orchestrator | 2025-03-26 16:53:56 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:56.114585 | orchestrator | 2025-03-26 16:53:56 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:56.118982 | orchestrator | 2025-03-26 16:53:56 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:53:59.194070 | orchestrator | 2025-03-26 16:53:56 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:53:59.194160 | orchestrator | 2025-03-26 16:53:59 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:53:59.196489 | orchestrator | 2025-03-26 16:53:59 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:53:59.196510 | orchestrator | 2025-03-26 16:53:59 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:54:02.264756 | orchestrator | 2025-03-26 16:53:59 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:02.264879 | orchestrator | 2025-03-26 16:54:02 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:02.265359 | orchestrator | 2025-03-26 16:54:02 | INFO  | Task 
28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:02.266143 | orchestrator | 2025-03-26 16:54:02 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:54:05.311787 | orchestrator | 2025-03-26 16:54:02 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:05.311929 | orchestrator | 2025-03-26 16:54:05 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:05.312018 | orchestrator | 2025-03-26 16:54:05 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:05.312847 | orchestrator | 2025-03-26 16:54:05 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:54:08.356544 | orchestrator | 2025-03-26 16:54:05 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:08.356675 | orchestrator | 2025-03-26 16:54:08 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:08.357454 | orchestrator | 2025-03-26 16:54:08 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:08.358895 | orchestrator | 2025-03-26 16:54:08 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:54:11.409988 | orchestrator | 2025-03-26 16:54:08 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:11.410178 | orchestrator | 2025-03-26 16:54:11 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:11.412994 | orchestrator | 2025-03-26 16:54:11 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:11.413589 | orchestrator | 2025-03-26 16:54:11 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:54:14.465748 | orchestrator | 2025-03-26 16:54:11 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:14.465879 | orchestrator | 2025-03-26 16:54:14 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:14.465974 | orchestrator | 2025-03-26 16:54:14 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:14.468283 | orchestrator | 2025-03-26 16:54:14 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:54:17.535685 | orchestrator | 2025-03-26 16:54:14 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:17.535875 | orchestrator | 2025-03-26 16:54:17 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:17.536952 | orchestrator | 2025-03-26 16:54:17 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:17.542177 | orchestrator | 2025-03-26 16:54:17 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:54:20.611157 | orchestrator | 2025-03-26 16:54:17 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:20.611300 | orchestrator | 2025-03-26 16:54:20 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:20.616991 | orchestrator | 2025-03-26 16:54:20 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:23.683074 | orchestrator | 2025-03-26 16:54:20 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:54:23.683203 | orchestrator | 2025-03-26 16:54:20 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:23.683243 | orchestrator | 2025-03-26 16:54:23 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state 
STARTED 2025-03-26 16:54:26.732149 | orchestrator | 2025-03-26 16:54:23 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:26.732279 | orchestrator | 2025-03-26 16:54:23 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:54:26.732299 | orchestrator | 2025-03-26 16:54:23 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:26.732333 | orchestrator | 2025-03-26 16:54:26 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:26.732682 | orchestrator | 2025-03-26 16:54:26 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:26.732744 | orchestrator | 2025-03-26 16:54:26 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state STARTED 2025-03-26 16:54:29.779047 | orchestrator | 2025-03-26 16:54:26 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:29.779174 | orchestrator | 2025-03-26 16:54:29 | INFO  | Task e17ff8bb-5e1c-4d81-ac26-82f6c30fcfb9 is in state STARTED 2025-03-26 16:54:29.779890 | orchestrator | 2025-03-26 16:54:29 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:54:29.781467 | orchestrator | 2025-03-26 16:54:29 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:54:29.786061 | orchestrator | 2025-03-26 16:54:29 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:29.788105 | orchestrator | 2025-03-26 16:54:29 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:29.793420 | orchestrator | 2025-03-26 16:54:29 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:54:29.798148 | orchestrator | 2025-03-26 16:54:29 | INFO  | Task 1b187e25-7b9f-4865-ac9c-4f01400d99c9 is in state SUCCESS 2025-03-26 16:54:29.801015 | orchestrator | 2025-03-26 16:54:29.801062 | orchestrator | 2025-03-26 16:54:29.801079 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-03-26 16:54:29.801102 | orchestrator | 2025-03-26 16:54:29.801117 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-03-26 16:54:29.801133 | orchestrator | Wednesday 26 March 2025 16:51:39 +0000 (0:00:00.412) 0:00:00.412 ******* 2025-03-26 16:54:29.801150 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-26 16:54:29.801167 | orchestrator | 2025-03-26 16:54:29.801181 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-03-26 16:54:29.801196 | orchestrator | Wednesday 26 March 2025 16:51:41 +0000 (0:00:02.069) 0:00:02.481 ******* 2025-03-26 16:54:29.801211 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-26 16:54:29.801226 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-26 16:54:29.801241 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-26 16:54:29.801256 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-26 16:54:29.801271 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-26 16:54:29.801286 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 
'fluentd']) 2025-03-26 16:54:29.801320 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-26 16:54:29.801336 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-26 16:54:29.801352 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-26 16:54:29.801367 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-26 16:54:29.801404 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-03-26 16:54:29.801419 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-26 16:54:29.801434 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-26 16:54:29.801449 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-26 16:54:29.801463 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-26 16:54:29.801478 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-26 16:54:29.801554 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-03-26 16:54:29.801572 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-26 16:54:29.801588 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-26 16:54:29.801605 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-26 16:54:29.801621 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-03-26 16:54:29.801637 | orchestrator | 2025-03-26 16:54:29.801658 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-03-26 16:54:29.801674 | orchestrator | Wednesday 26 March 2025 16:51:46 +0000 (0:00:05.514) 0:00:07.995 ******* 2025-03-26 16:54:29.801691 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-03-26 16:54:29.801747 | orchestrator | 2025-03-26 16:54:29.801764 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-03-26 16:54:29.801779 | orchestrator | Wednesday 26 March 2025 16:51:49 +0000 (0:00:02.917) 0:00:10.913 ******* 2025-03-26 16:54:29.801800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.801820 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.801850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.801877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.801893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.801909 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.801924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.801940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.801955 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.801978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.802075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.802096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.802111 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.802131 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.802149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.802164 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.802178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.802203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.802229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.802245 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.802260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.802274 | orchestrator | 2025-03-26 16:54:29.802289 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-03-26 16:54:29.802304 | orchestrator | Wednesday 26 March 2025 16:51:58 +0000 (0:00:08.282) 0:00:19.195 ******* 2025-03-26 16:54:29.802319 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.802335 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802355 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802371 | orchestrator | skipping: [testbed-manager] 2025-03-26 16:54:29.802395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.802423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802455 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:54:29.802470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.802485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802516 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:54:29.802531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.802547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.802624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802655 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:54:29.802670 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:54:29.802685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.802700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802785 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:54:29.802806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.802822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802851 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:54:29.802866 | orchestrator | 2025-03-26 16:54:29.802880 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-03-26 16:54:29.802894 | orchestrator | Wednesday 26 March 2025 16:52:01 +0000 (0:00:03.361) 0:00:22.557 ******* 2025-03-26 16:54:29.802929 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.802945 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802960 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.802983 | orchestrator | skipping: [testbed-manager] 2025-03-26 16:54:29.802997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.803019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.803039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.803054 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:54:29.803068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.803083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.803097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.803112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.803133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.803147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.803187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.803202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.803217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.803231 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:54:29.803245 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:54:29.803259 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:54:29.803273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.803287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.803308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-03-26 16:54:29.803323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.803348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.803362 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:54:29.803377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.803391 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:54:29.803405 | orchestrator | 2025-03-26 16:54:29.803419 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-03-26 16:54:29.803433 | orchestrator | Wednesday 26 March 2025 16:52:05 +0000 (0:00:04.460) 0:00:27.017 ******* 2025-03-26 16:54:29.803447 | orchestrator | skipping: [testbed-manager] 2025-03-26 16:54:29.803461 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:54:29.803475 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:54:29.803489 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:54:29.803503 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:54:29.803517 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:54:29.803530 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:54:29.803544 | orchestrator | 2025-03-26 16:54:29.803558 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-03-26 16:54:29.803572 | orchestrator | Wednesday 26 March 2025 16:52:07 +0000 (0:00:02.069) 0:00:29.086 ******* 2025-03-26 16:54:29.803586 | orchestrator | skipping: [testbed-manager] 2025-03-26 16:54:29.803600 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:54:29.803614 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:54:29.803628 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:54:29.803641 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:54:29.803655 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:54:29.803669 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:54:29.803683 | orchestrator | 2025-03-26 16:54:29.803697 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-03-26 16:54:29.803739 | orchestrator | Wednesday 26 March 2025 16:52:09 +0000 (0:00:01.435) 0:00:30.522 ******* 2025-03-26 16:54:29.803754 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:54:29.803767 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:54:29.803781 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:54:29.803795 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:54:29.803809 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:54:29.803822 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:54:29.803836 | orchestrator | changed: [testbed-manager] 2025-03-26 16:54:29.803850 | orchestrator | 2025-03-26 16:54:29.803864 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-03-26 16:54:29.803878 | orchestrator | 
Wednesday 26 March 2025 16:52:40 +0000 (0:00:30.670) 0:01:01.192 ******* 2025-03-26 16:54:29.803891 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:54:29.803905 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:54:29.803919 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:54:29.803933 | orchestrator | ok: [testbed-manager] 2025-03-26 16:54:29.803947 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:54:29.803961 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:54:29.803974 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:54:29.803993 | orchestrator | 2025-03-26 16:54:29.804007 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-03-26 16:54:29.804021 | orchestrator | Wednesday 26 March 2025 16:52:45 +0000 (0:00:04.983) 0:01:06.176 ******* 2025-03-26 16:54:29.804035 | orchestrator | ok: [testbed-manager] 2025-03-26 16:54:29.804049 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:54:29.804063 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:54:29.804077 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:54:29.804091 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:54:29.804104 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:54:29.804118 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:54:29.804132 | orchestrator | 2025-03-26 16:54:29.804146 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-03-26 16:54:29.804160 | orchestrator | Wednesday 26 March 2025 16:52:46 +0000 (0:00:01.817) 0:01:07.993 ******* 2025-03-26 16:54:29.804174 | orchestrator | skipping: [testbed-manager] 2025-03-26 16:54:29.804188 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:54:29.804202 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:54:29.804215 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:54:29.804229 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:54:29.804243 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:54:29.804257 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:54:29.804271 | orchestrator | 2025-03-26 16:54:29.804285 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-03-26 16:54:29.804298 | orchestrator | Wednesday 26 March 2025 16:52:48 +0000 (0:00:01.896) 0:01:09.889 ******* 2025-03-26 16:54:29.804312 | orchestrator | skipping: [testbed-manager] 2025-03-26 16:54:29.804326 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:54:29.804340 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:54:29.804353 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:54:29.804367 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:54:29.804381 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:54:29.804395 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:54:29.804409 | orchestrator | 2025-03-26 16:54:29.804423 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-03-26 16:54:29.804436 | orchestrator | Wednesday 26 March 2025 16:52:50 +0000 (0:00:02.135) 0:01:12.024 ******* 2025-03-26 16:54:29.804456 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.804483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.804499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.804514 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.804543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804557 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.804577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804599 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.804696 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.804760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.804832 | orchestrator | 2025-03-26 16:54:29.804846 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-03-26 16:54:29.804860 | orchestrator | Wednesday 26 March 2025 16:53:01 +0000 (0:00:10.709) 0:01:22.734 ******* 2025-03-26 16:54:29.804874 | orchestrator | [WARNING]: Skipped 2025-03-26 16:54:29.804889 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-03-26 16:54:29.804902 | orchestrator | to this access issue: 2025-03-26 16:54:29.804916 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-03-26 16:54:29.804930 | orchestrator | directory 2025-03-26 16:54:29.804944 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-26 16:54:29.804958 | orchestrator | 2025-03-26 16:54:29.804972 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-03-26 16:54:29.804986 | orchestrator | Wednesday 26 March 2025 16:53:03 +0000 (0:00:02.038) 0:01:24.772 ******* 2025-03-26 16:54:29.805005 | orchestrator | [WARNING]: Skipped 2025-03-26 16:54:29.805019 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-03-26 16:54:29.805033 | orchestrator | to this access issue: 2025-03-26 16:54:29.805047 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-03-26 16:54:29.805060 | orchestrator | directory 2025-03-26 16:54:29.805074 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-26 16:54:29.805088 | orchestrator | 2025-03-26 16:54:29.805107 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-03-26 16:54:29.805122 | orchestrator | Wednesday 26 March 2025 16:53:04 +0000 (0:00:01.349) 0:01:26.121 ******* 2025-03-26 16:54:29.805135 | orchestrator | [WARNING]: Skipped 2025-03-26 16:54:29.805149 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-03-26 16:54:29.805163 | orchestrator | to this access issue: 2025-03-26 16:54:29.805177 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-03-26 16:54:29.805190 | orchestrator | directory 2025-03-26 16:54:29.805205 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-26 16:54:29.805218 | orchestrator | 2025-03-26 16:54:29.805232 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-03-26 16:54:29.805251 | orchestrator | Wednesday 26 March 2025 16:53:06 +0000 (0:00:01.112) 0:01:27.234 ******* 2025-03-26 16:54:29.805266 | orchestrator | [WARNING]: Skipped 2025-03-26 
16:54:29.805279 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-03-26 16:54:29.805293 | orchestrator | to this access issue: 2025-03-26 16:54:29.805307 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-03-26 16:54:29.805321 | orchestrator | directory 2025-03-26 16:54:29.805334 | orchestrator | ok: [testbed-manager -> localhost] 2025-03-26 16:54:29.805348 | orchestrator | 2025-03-26 16:54:29.805362 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-03-26 16:54:29.805375 | orchestrator | Wednesday 26 March 2025 16:53:07 +0000 (0:00:00.964) 0:01:28.198 ******* 2025-03-26 16:54:29.805389 | orchestrator | changed: [testbed-manager] 2025-03-26 16:54:29.805403 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:54:29.805417 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:54:29.805430 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:54:29.805444 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:54:29.805458 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:54:29.805471 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:54:29.805485 | orchestrator | 2025-03-26 16:54:29.805499 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-03-26 16:54:29.805512 | orchestrator | Wednesday 26 March 2025 16:53:15 +0000 (0:00:08.601) 0:01:36.800 ******* 2025-03-26 16:54:29.805526 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-26 16:54:29.805540 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-26 16:54:29.805554 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-26 16:54:29.805568 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-26 16:54:29.805582 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-26 16:54:29.805595 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-26 16:54:29.805609 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-03-26 16:54:29.805623 | orchestrator | 2025-03-26 16:54:29.805636 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-03-26 16:54:29.805650 | orchestrator | Wednesday 26 March 2025 16:53:20 +0000 (0:00:05.112) 0:01:41.913 ******* 2025-03-26 16:54:29.805671 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:54:29.805685 | orchestrator | changed: [testbed-manager] 2025-03-26 16:54:29.805699 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:54:29.805767 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:54:29.805783 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:54:29.805797 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:54:29.805811 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:54:29.805825 | orchestrator | 2025-03-26 16:54:29.805839 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-03-26 16:54:29.805853 | orchestrator | Wednesday 26 March 2025 16:53:25 +0000 (0:00:05.051) 0:01:46.964 
******* 2025-03-26 16:54:29.805872 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.805888 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.805902 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.805933 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.805951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.805964 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.805988 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.806050 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.806071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.806092 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.806106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.806119 | orchestrator | 
ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806139 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806152 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.806165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.806178 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.806195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 16:54:29.806213 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806230 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806243 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806262 | orchestrator | 2025-03-26 16:54:29.806274 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-03-26 16:54:29.806287 | orchestrator | Wednesday 26 March 2025 16:53:29 +0000 (0:00:03.244) 0:01:50.209 ******* 2025-03-26 16:54:29.806300 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-26 16:54:29.806312 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-26 16:54:29.806325 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-26 16:54:29.806337 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-26 16:54:29.806350 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-26 16:54:29.806362 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-26 16:54:29.806374 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-03-26 16:54:29.806387 | orchestrator | 2025-03-26 16:54:29.806399 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-03-26 16:54:29.806412 | orchestrator | Wednesday 26 March 2025 16:53:32 +0000 (0:00:03.757) 0:01:53.967 ******* 2025-03-26 16:54:29.806424 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-26 16:54:29.806479 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-26 16:54:29.806492 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-26 16:54:29.806505 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-26 16:54:29.806517 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-26 16:54:29.806529 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-26 16:54:29.806542 | orchestrator | changed: [testbed-node-5] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-03-26 16:54:29.806554 | orchestrator | 2025-03-26 16:54:29.806566 | orchestrator | TASK [common : Check common containers] **************************************** 2025-03-26 16:54:29.806578 | orchestrator | Wednesday 26 March 2025 16:53:35 +0000 (0:00:03.001) 0:01:56.968 ******* 2025-03-26 16:54:29.806592 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.806605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.806625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.806645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.806658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806670 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806725 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.806751 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.806765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-03-26 16:54:29.806804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806816 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806842 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806870 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806909 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:54:29.806922 | orchestrator | 2025-03-26 16:54:29.806935 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-03-26 16:54:29.806947 | orchestrator | Wednesday 26 March 2025 16:53:40 +0000 (0:00:04.832) 0:02:01.801 ******* 2025-03-26 16:54:29.806960 | orchestrator | changed: [testbed-manager] 2025-03-26 16:54:29.806972 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:54:29.806984 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:54:29.806997 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:54:29.807009 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:54:29.807021 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:54:29.807033 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:54:29.807046 | orchestrator | 2025-03-26 16:54:29.807062 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-03-26 16:54:29.807075 | orchestrator | Wednesday 26 March 2025 16:53:42 +0000 (0:00:02.108) 0:02:03.910 ******* 2025-03-26 16:54:29.807088 | orchestrator | changed: [testbed-manager] 2025-03-26 16:54:29.807104 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:54:29.807116 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:54:29.807129 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:54:29.807141 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:54:29.807153 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:54:29.807165 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:54:29.807178 | orchestrator | 2025-03-26 16:54:29.807190 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-26 16:54:29.807202 | orchestrator | Wednesday 26 March 2025 16:53:44 +0000 (0:00:01.760) 0:02:05.670 ******* 2025-03-26 16:54:29.807214 | orchestrator | 2025-03-26 16:54:29.807226 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2025-03-26 16:54:29.807238 | orchestrator | Wednesday 26 March 2025 16:53:44 +0000 (0:00:00.070) 0:02:05.740 ******* 2025-03-26 16:54:29.807256 | orchestrator | 2025-03-26 16:54:29.807269 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-26 16:54:29.807281 | orchestrator | Wednesday 26 March 2025 16:53:44 +0000 (0:00:00.059) 0:02:05.800 ******* 2025-03-26 16:54:29.807293 | orchestrator | 2025-03-26 16:54:29.807305 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-26 16:54:29.807317 | orchestrator | Wednesday 26 March 2025 16:53:44 +0000 (0:00:00.057) 0:02:05.857 ******* 2025-03-26 16:54:29.807329 | orchestrator | 2025-03-26 16:54:29.807342 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-26 16:54:29.807354 | orchestrator | Wednesday 26 March 2025 16:53:44 +0000 (0:00:00.264) 0:02:06.122 ******* 2025-03-26 16:54:29.807366 | orchestrator | 2025-03-26 16:54:29.807378 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-26 16:54:29.807390 | orchestrator | Wednesday 26 March 2025 16:53:45 +0000 (0:00:00.092) 0:02:06.215 ******* 2025-03-26 16:54:29.807402 | orchestrator | 2025-03-26 16:54:29.807415 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-03-26 16:54:29.807427 | orchestrator | Wednesday 26 March 2025 16:53:45 +0000 (0:00:00.103) 0:02:06.318 ******* 2025-03-26 16:54:29.807439 | orchestrator | 2025-03-26 16:54:29.807451 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-03-26 16:54:29.807463 | orchestrator | Wednesday 26 March 2025 16:53:45 +0000 (0:00:00.124) 0:02:06.442 ******* 2025-03-26 16:54:29.807475 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:54:29.807493 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:54:29.807506 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:54:29.807518 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:54:29.807531 | orchestrator | changed: [testbed-manager] 2025-03-26 16:54:29.807543 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:54:29.807555 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:54:29.807567 | orchestrator | 2025-03-26 16:54:29.807579 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-03-26 16:54:29.807592 | orchestrator | Wednesday 26 March 2025 16:53:51 +0000 (0:00:06.348) 0:02:12.791 ******* 2025-03-26 16:54:29.807604 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:54:29.807616 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:54:29.807628 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:54:29.807640 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:54:29.807652 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:54:29.807664 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:54:29.807677 | orchestrator | changed: [testbed-manager] 2025-03-26 16:54:29.807689 | orchestrator | 2025-03-26 16:54:29.807701 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-03-26 16:54:29.807729 | orchestrator | Wednesday 26 March 2025 16:54:13 +0000 (0:00:21.519) 0:02:34.310 ******* 2025-03-26 16:54:29.807742 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:54:29.807755 | orchestrator | ok: 
[testbed-node-0] 2025-03-26 16:54:29.807767 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:54:29.807779 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:54:29.807791 | orchestrator | ok: [testbed-manager] 2025-03-26 16:54:29.807804 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:54:29.807816 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:54:29.807828 | orchestrator | 2025-03-26 16:54:29.807840 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-03-26 16:54:29.807852 | orchestrator | Wednesday 26 March 2025 16:54:16 +0000 (0:00:02.993) 0:02:37.303 ******* 2025-03-26 16:54:29.807865 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:54:29.807877 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:54:29.807889 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:54:29.807901 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:54:29.807913 | orchestrator | changed: [testbed-manager] 2025-03-26 16:54:29.807925 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:54:29.807943 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:54:29.807956 | orchestrator | 2025-03-26 16:54:29.807968 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 16:54:29.807981 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-26 16:54:29.807994 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-26 16:54:29.808006 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-26 16:54:29.808019 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-26 16:54:29.808031 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-26 16:54:29.808043 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-26 16:54:29.808056 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-03-26 16:54:29.808068 | orchestrator | 2025-03-26 16:54:29.808080 | orchestrator | 2025-03-26 16:54:29.808092 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-26 16:54:29.808104 | orchestrator | Wednesday 26 March 2025 16:54:27 +0000 (0:00:11.606) 0:02:48.909 ******* 2025-03-26 16:54:29.808117 | orchestrator | =============================================================================== 2025-03-26 16:54:29.808129 | orchestrator | common : Ensure fluentd image is present for label check --------------- 30.67s 2025-03-26 16:54:29.808141 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 21.52s 2025-03-26 16:54:29.808153 | orchestrator | common : Restart cron container ---------------------------------------- 11.61s 2025-03-26 16:54:29.808166 | orchestrator | common : Copying over config.json files for services ------------------- 10.71s 2025-03-26 16:54:29.808182 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 8.60s 2025-03-26 16:54:29.808195 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 8.28s 2025-03-26 16:54:29.808207 | orchestrator | common : Restart fluentd container -------------------------------------- 6.35s 
2025-03-26 16:54:29.808219 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.52s 2025-03-26 16:54:29.808231 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.11s 2025-03-26 16:54:29.808243 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 5.05s 2025-03-26 16:54:29.808255 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 4.98s 2025-03-26 16:54:29.808268 | orchestrator | common : Check common containers ---------------------------------------- 4.83s 2025-03-26 16:54:29.808280 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.46s 2025-03-26 16:54:29.808292 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.76s 2025-03-26 16:54:29.808310 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.36s 2025-03-26 16:54:32.849814 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.24s 2025-03-26 16:54:32.849938 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.00s 2025-03-26 16:54:32.849958 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.99s 2025-03-26 16:54:32.849974 | orchestrator | common : include_tasks -------------------------------------------------- 2.92s 2025-03-26 16:54:32.849989 | orchestrator | common : Set fluentd facts ---------------------------------------------- 2.14s 2025-03-26 16:54:32.850087 | orchestrator | 2025-03-26 16:54:29 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:32.850121 | orchestrator | 2025-03-26 16:54:32 | INFO  | Task e17ff8bb-5e1c-4d81-ac26-82f6c30fcfb9 is in state STARTED 2025-03-26 16:54:32.850604 | orchestrator | 2025-03-26 16:54:32 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:54:32.853228 | orchestrator | 2025-03-26 16:54:32 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:54:32.854084 | orchestrator | 2025-03-26 16:54:32 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:32.854876 | orchestrator | 2025-03-26 16:54:32 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:32.861541 | orchestrator | 2025-03-26 16:54:32 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:54:35.907172 | orchestrator | 2025-03-26 16:54:32 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:35.907313 | orchestrator | 2025-03-26 16:54:35 | INFO  | Task e17ff8bb-5e1c-4d81-ac26-82f6c30fcfb9 is in state STARTED 2025-03-26 16:54:35.907929 | orchestrator | 2025-03-26 16:54:35 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:54:35.907960 | orchestrator | 2025-03-26 16:54:35 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:54:35.908656 | orchestrator | 2025-03-26 16:54:35 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:35.909349 | orchestrator | 2025-03-26 16:54:35 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:35.909962 | orchestrator | 2025-03-26 16:54:35 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:54:38.966308 | orchestrator | 2025-03-26 16:54:35 | INFO  | Wait 1 
second(s) until the next check 2025-03-26 16:54:38.966461 | orchestrator | 2025-03-26 16:54:38 | INFO  | Task e17ff8bb-5e1c-4d81-ac26-82f6c30fcfb9 is in state STARTED 2025-03-26 16:54:38.966974 | orchestrator | 2025-03-26 16:54:38 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:54:38.967643 | orchestrator | 2025-03-26 16:54:38 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:54:38.968579 | orchestrator | 2025-03-26 16:54:38 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:38.969341 | orchestrator | 2025-03-26 16:54:38 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:38.970475 | orchestrator | 2025-03-26 16:54:38 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:54:42.013063 | orchestrator | 2025-03-26 16:54:38 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:42.013207 | orchestrator | 2025-03-26 16:54:42 | INFO  | Task e17ff8bb-5e1c-4d81-ac26-82f6c30fcfb9 is in state STARTED 2025-03-26 16:54:42.014866 | orchestrator | 2025-03-26 16:54:42 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:54:42.017127 | orchestrator | 2025-03-26 16:54:42 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:54:42.018266 | orchestrator | 2025-03-26 16:54:42 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:42.019948 | orchestrator | 2025-03-26 16:54:42 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:42.021998 | orchestrator | 2025-03-26 16:54:42 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:54:42.023970 | orchestrator | 2025-03-26 16:54:42 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:45.076040 | orchestrator | 2025-03-26 16:54:45 | INFO  | Task e17ff8bb-5e1c-4d81-ac26-82f6c30fcfb9 is in state STARTED 2025-03-26 16:54:45.081799 | orchestrator | 2025-03-26 16:54:45 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:54:45.085941 | orchestrator | 2025-03-26 16:54:45 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:54:45.093006 | orchestrator | 2025-03-26 16:54:45 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:45.105826 | orchestrator | 2025-03-26 16:54:45 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:48.150714 | orchestrator | 2025-03-26 16:54:45 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:54:48.150875 | orchestrator | 2025-03-26 16:54:45 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:48.150913 | orchestrator | 2025-03-26 16:54:48 | INFO  | Task e17ff8bb-5e1c-4d81-ac26-82f6c30fcfb9 is in state STARTED 2025-03-26 16:54:48.151086 | orchestrator | 2025-03-26 16:54:48 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:54:48.152135 | orchestrator | 2025-03-26 16:54:48 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:54:48.153048 | orchestrator | 2025-03-26 16:54:48 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:48.153966 | orchestrator | 2025-03-26 16:54:48 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:48.155094 | orchestrator | 2025-03-26 
16:54:48 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:54:51.204170 | orchestrator | 2025-03-26 16:54:48 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:51.204280 | orchestrator | 2025-03-26 16:54:51 | INFO  | Task e17ff8bb-5e1c-4d81-ac26-82f6c30fcfb9 is in state STARTED 2025-03-26 16:54:51.208796 | orchestrator | 2025-03-26 16:54:51 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:54:51.212484 | orchestrator | 2025-03-26 16:54:51 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:54:51.222659 | orchestrator | 2025-03-26 16:54:51 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:51.232954 | orchestrator | 2025-03-26 16:54:51 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:51.239704 | orchestrator | 2025-03-26 16:54:51 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:54:51.246924 | orchestrator | 2025-03-26 16:54:51 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:54.305149 | orchestrator | 2025-03-26 16:54:54 | INFO  | Task e17ff8bb-5e1c-4d81-ac26-82f6c30fcfb9 is in state STARTED 2025-03-26 16:54:54.306581 | orchestrator | 2025-03-26 16:54:54 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:54:54.306634 | orchestrator | 2025-03-26 16:54:54 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:54:54.310791 | orchestrator | 2025-03-26 16:54:54 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:54.311714 | orchestrator | 2025-03-26 16:54:54 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:54.313019 | orchestrator | 2025-03-26 16:54:54 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:54:54.313245 | orchestrator | 2025-03-26 16:54:54 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:54:57.377350 | orchestrator | 2025-03-26 16:54:57 | INFO  | Task e17ff8bb-5e1c-4d81-ac26-82f6c30fcfb9 is in state STARTED 2025-03-26 16:54:57.379432 | orchestrator | 2025-03-26 16:54:57 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:54:57.381449 | orchestrator | 2025-03-26 16:54:57 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:54:57.386576 | orchestrator | 2025-03-26 16:54:57 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:54:57.389033 | orchestrator | 2025-03-26 16:54:57 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:54:57.390006 | orchestrator | 2025-03-26 16:54:57 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:00.434261 | orchestrator | 2025-03-26 16:54:57 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:00.434406 | orchestrator | 2025-03-26 16:55:00 | INFO  | Task e17ff8bb-5e1c-4d81-ac26-82f6c30fcfb9 is in state SUCCESS 2025-03-26 16:55:00.435614 | orchestrator | 2025-03-26 16:55:00 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:55:00.440779 | orchestrator | 2025-03-26 16:55:00 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:00.444436 | orchestrator | 2025-03-26 16:55:00 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:00.445086 | 
orchestrator | 2025-03-26 16:55:00 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:00.454061 | orchestrator | 2025-03-26 16:55:00 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:00.456401 | orchestrator | 2025-03-26 16:55:00 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:03.527958 | orchestrator | 2025-03-26 16:55:00 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:03.528096 | orchestrator | 2025-03-26 16:55:03 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:55:03.531308 | orchestrator | 2025-03-26 16:55:03 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:03.532804 | orchestrator | 2025-03-26 16:55:03 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:03.533804 | orchestrator | 2025-03-26 16:55:03 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:03.534805 | orchestrator | 2025-03-26 16:55:03 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:03.535799 | orchestrator | 2025-03-26 16:55:03 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:03.535914 | orchestrator | 2025-03-26 16:55:03 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:06.639061 | orchestrator | 2025-03-26 16:55:06 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:55:06.643966 | orchestrator | 2025-03-26 16:55:06 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:06.644953 | orchestrator | 2025-03-26 16:55:06 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:06.644987 | orchestrator | 2025-03-26 16:55:06 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:06.646146 | orchestrator | 2025-03-26 16:55:06 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:06.649420 | orchestrator | 2025-03-26 16:55:06 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:06.649566 | orchestrator | 2025-03-26 16:55:06 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:09.708307 | orchestrator | 2025-03-26 16:55:09 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:55:09.717126 | orchestrator | 2025-03-26 16:55:09 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:09.722003 | orchestrator | 2025-03-26 16:55:09 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:09.724301 | orchestrator | 2025-03-26 16:55:09 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:09.729742 | orchestrator | 2025-03-26 16:55:09 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:09.730672 | orchestrator | 2025-03-26 16:55:09 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:09.730877 | orchestrator | 2025-03-26 16:55:09 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:12.788839 | orchestrator | 2025-03-26 16:55:12 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:55:12.790710 | orchestrator | 2025-03-26 16:55:12 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 
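Note on the container definitions printed by the "Check common containers" and "Check redis containers" loops above: each loop item is a dictionary describing one container (container_name, image, environment, volumes, dimensions, and optionally a healthcheck). As a reading aid only, and not the actual kolla-ansible module logic, the minimal Python sketch below shows how such a definition could be mapped onto a plain docker run invocation; the build_run_command helper and the trimmed fluentd example are assumptions made purely for illustration.

# Illustrative sketch only: translate a kolla-style container definition
# (as printed in the "Check common containers" loop above) into a
# "docker run" argument list. This is NOT the kolla-ansible implementation.

definition = {
    "container_name": "fluentd",
    "image": "registry.osism.tech/kolla/release/fluentd:5.0.5.20241206",
    "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    # Volume list trimmed from the full set shown in the log output.
    "volumes": [
        "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
        "kolla_logs:/var/log/kolla/",
    ],
}

def build_run_command(d):
    # Assumed helper: maps the fields shown in the log onto docker CLI flags.
    cmd = ["docker", "run", "--detach", "--name", d["container_name"]]
    for key, value in d.get("environment", {}).items():
        cmd += ["--env", f"{key}={value}"]
    for volume in d.get("volumes", []):
        cmd += ["--volume", volume]
    cmd.append(d["image"])
    return cmd

print(" ".join(build_run_command(definition)))

Running the sketch prints a single docker run command line, which only approximates what each "changed" item corresponds to on the target host; the real deployment also handles config injection, restart policy, and healthchecks.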
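The long runs of "Task <id> is in state STARTED" followed by "Wait 1 second(s) until the next check" show the manager polling its asynchronous deploy tasks until each reaches a terminal state (SUCCESS appears above once a task finishes). A minimal sketch of that waiting pattern, assuming a generic get_task_state callable rather than the real OSISM client API, looks like this:

import time
from typing import Callable, List

def wait_for_tasks(task_ids: List[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    # Poll each task until every one reports a terminal state.
    # Terminal states are assumed here; the log above only shows
    # STARTED and SUCCESS.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)

# Example with a dummy state source that succeeds after three polls:
calls = {}
def fake_state(task_id):
    calls[task_id] = calls.get(task_id, 0) + 1
    return "SUCCESS" if calls[task_id] >= 3 else "STARTED"

wait_for_tasks(["e17ff8bb", "a48a6945"], fake_state)

The printed messages deliberately mirror the log lines above; the actual client, task IDs, and check interval belong to the OSISM tooling and are not reproduced here.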
2025-03-26 16:55:12.793393 | orchestrator | 2025-03-26 16:55:12 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:12.794475 | orchestrator | 2025-03-26 16:55:12 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:12.796930 | orchestrator | 2025-03-26 16:55:12 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:12.797792 | orchestrator | 2025-03-26 16:55:12 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:15.859507 | orchestrator | 2025-03-26 16:55:12 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:15.859643 | orchestrator | 2025-03-26 16:55:15 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:55:15.864140 | orchestrator | 2025-03-26 16:55:15 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:15.880514 | orchestrator | 2025-03-26 16:55:15 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:15.889917 | orchestrator | 2025-03-26 16:55:15 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:15.896612 | orchestrator | 2025-03-26 16:55:15 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:15.896647 | orchestrator | 2025-03-26 16:55:15 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:18.956577 | orchestrator | 2025-03-26 16:55:15 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:18.956718 | orchestrator | 2025-03-26 16:55:18 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:55:18.956970 | orchestrator | 2025-03-26 16:55:18 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:18.961828 | orchestrator | 2025-03-26 16:55:18 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:18.963004 | orchestrator | 2025-03-26 16:55:18 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:18.964516 | orchestrator | 2025-03-26 16:55:18 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:18.968545 | orchestrator | 2025-03-26 16:55:18 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:18.968881 | orchestrator | 2025-03-26 16:55:18 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:22.067632 | orchestrator | 2025-03-26 16:55:22 | INFO  | Task a48a6945-eabe-4f83-93f3-0c18da27b15c is in state STARTED 2025-03-26 16:55:22.070675 | orchestrator | 2025-03-26 16:55:22 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:22.075502 | orchestrator | 2025-03-26 16:55:22 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:22.076621 | orchestrator | 2025-03-26 16:55:22 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:22.081989 | orchestrator | 2025-03-26 16:55:22 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:22.086934 | orchestrator | 2025-03-26 16:55:22 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:25.134377 | orchestrator | 2025-03-26 16:55:22 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:25.134505 | orchestrator | 2025-03-26 16:55:25 | INFO  | Task 
a48a6945-eabe-4f83-93f3-0c18da27b15c is in state SUCCESS 2025-03-26 16:55:25.135083 | orchestrator | 2025-03-26 16:55:25.135113 | orchestrator | 2025-03-26 16:55:25.135127 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-26 16:55:25.135142 | orchestrator | 2025-03-26 16:55:25.135156 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-26 16:55:25.135170 | orchestrator | Wednesday 26 March 2025 16:54:34 +0000 (0:00:00.827) 0:00:00.827 ******* 2025-03-26 16:55:25.135184 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:55:25.135200 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:55:25.135214 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:55:25.135228 | orchestrator | 2025-03-26 16:55:25.135242 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-26 16:55:25.135256 | orchestrator | Wednesday 26 March 2025 16:54:34 +0000 (0:00:00.856) 0:00:01.684 ******* 2025-03-26 16:55:25.135271 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-03-26 16:55:25.135285 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-03-26 16:55:25.135299 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-03-26 16:55:25.135312 | orchestrator | 2025-03-26 16:55:25.135326 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-03-26 16:55:25.135340 | orchestrator | 2025-03-26 16:55:25.135354 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-03-26 16:55:25.135368 | orchestrator | Wednesday 26 March 2025 16:54:35 +0000 (0:00:01.080) 0:00:02.764 ******* 2025-03-26 16:55:25.135383 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 16:55:25.135397 | orchestrator | 2025-03-26 16:55:25.135410 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-03-26 16:55:25.135424 | orchestrator | Wednesday 26 March 2025 16:54:37 +0000 (0:00:01.790) 0:00:04.555 ******* 2025-03-26 16:55:25.135438 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-03-26 16:55:25.135452 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-03-26 16:55:25.135465 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-03-26 16:55:25.135479 | orchestrator | 2025-03-26 16:55:25.135494 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-03-26 16:55:25.135508 | orchestrator | Wednesday 26 March 2025 16:54:39 +0000 (0:00:01.849) 0:00:06.404 ******* 2025-03-26 16:55:25.135521 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-03-26 16:55:25.135535 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-03-26 16:55:25.135549 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-03-26 16:55:25.135563 | orchestrator | 2025-03-26 16:55:25.135602 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-03-26 16:55:25.135617 | orchestrator | Wednesday 26 March 2025 16:54:43 +0000 (0:00:03.707) 0:00:10.112 ******* 2025-03-26 16:55:25.135631 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:55:25.135660 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:55:25.135675 | orchestrator | changed: [testbed-node-2] 2025-03-26 
16:55:25.135691 | orchestrator | 2025-03-26 16:55:25.135706 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-03-26 16:55:25.135722 | orchestrator | Wednesday 26 March 2025 16:54:48 +0000 (0:00:05.133) 0:00:15.245 ******* 2025-03-26 16:55:25.135737 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:55:25.135752 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:55:25.135794 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:55:25.135811 | orchestrator | 2025-03-26 16:55:25.135831 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 16:55:25.135847 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:55:25.135865 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:55:25.135880 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:55:25.135895 | orchestrator | 2025-03-26 16:55:25.135909 | orchestrator | 2025-03-26 16:55:25.135924 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-26 16:55:25.135939 | orchestrator | Wednesday 26 March 2025 16:54:56 +0000 (0:00:08.418) 0:00:23.664 ******* 2025-03-26 16:55:25.135954 | orchestrator | =============================================================================== 2025-03-26 16:55:25.135969 | orchestrator | memcached : Restart memcached container --------------------------------- 8.42s 2025-03-26 16:55:25.135984 | orchestrator | memcached : Check memcached container ----------------------------------- 5.13s 2025-03-26 16:55:25.135999 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.71s 2025-03-26 16:55:25.136013 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.85s 2025-03-26 16:55:25.136028 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.79s 2025-03-26 16:55:25.136042 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.08s 2025-03-26 16:55:25.136056 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s 2025-03-26 16:55:25.136070 | orchestrator | 2025-03-26 16:55:25.136083 | orchestrator | 2025-03-26 16:55:25.136097 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-26 16:55:25.136111 | orchestrator | 2025-03-26 16:55:25.136124 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-26 16:55:25.136138 | orchestrator | Wednesday 26 March 2025 16:54:35 +0000 (0:00:01.330) 0:00:01.330 ******* 2025-03-26 16:55:25.136151 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:55:25.136165 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:55:25.136179 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:55:25.136193 | orchestrator | 2025-03-26 16:55:25.136207 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-26 16:55:25.136232 | orchestrator | Wednesday 26 March 2025 16:54:36 +0000 (0:00:00.903) 0:00:02.234 ******* 2025-03-26 16:55:25.136247 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-03-26 16:55:25.136261 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-03-26 
16:55:25.136274 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-03-26 16:55:25.136288 | orchestrator | 2025-03-26 16:55:25.136302 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-03-26 16:55:25.136315 | orchestrator | 2025-03-26 16:55:25.136329 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-03-26 16:55:25.136352 | orchestrator | Wednesday 26 March 2025 16:54:37 +0000 (0:00:00.719) 0:00:02.954 ******* 2025-03-26 16:55:25.136366 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 16:55:25.136380 | orchestrator | 2025-03-26 16:55:25.136394 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-03-26 16:55:25.136408 | orchestrator | Wednesday 26 March 2025 16:54:38 +0000 (0:00:01.283) 0:00:04.237 ******* 2025-03-26 16:55:25.136424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136489 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136541 | orchestrator | 2025-03-26 16:55:25.136556 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-03-26 16:55:25.136570 | orchestrator | Wednesday 26 March 2025 16:54:41 +0000 (0:00:02.817) 0:00:07.054 ******* 2025-03-26 16:55:25.136584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136628 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136687 | orchestrator | 2025-03-26 16:55:25.136702 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-03-26 16:55:25.136716 | orchestrator | Wednesday 26 March 2025 16:54:45 +0000 (0:00:04.452) 0:00:11.507 ******* 2025-03-26 16:55:25.136730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136851 | orchestrator | 2025-03-26 16:55:25.136865 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-03-26 16:55:25.136879 | orchestrator | Wednesday 26 March 2025 16:54:51 +0000 (0:00:05.511) 0:00:17.018 ******* 2025-03-26 16:55:25.136897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.136996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.137036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-03-26 16:55:25.145683 | orchestrator | 2025-03-26 16:55:25.145827 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-26 16:55:25.145848 | orchestrator | Wednesday 26 March 2025 16:54:54 +0000 (0:00:03.278) 0:00:20.296 ******* 2025-03-26 16:55:25.145862 | orchestrator | 2025-03-26 16:55:25.145877 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-26 16:55:25.145891 | orchestrator | Wednesday 26 March 2025 16:54:54 +0000 (0:00:00.223) 0:00:20.520 ******* 2025-03-26 16:55:25.145905 | orchestrator | 2025-03-26 16:55:25.145919 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-03-26 16:55:25.145933 | orchestrator | Wednesday 26 March 2025 16:54:55 +0000 (0:00:00.375) 0:00:20.895 ******* 2025-03-26 16:55:25.145946 | orchestrator | 2025-03-26 16:55:25.145960 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-03-26 16:55:25.145974 | orchestrator | Wednesday 26 March 2025 16:54:55 +0000 (0:00:00.561) 0:00:21.457 ******* 2025-03-26 16:55:25.145988 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:55:25.146003 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:55:25.146063 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:55:25.146079 | orchestrator | 2025-03-26 16:55:25.146093 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-03-26 16:55:25.146107 | orchestrator | Wednesday 26 March 2025 16:55:04 +0000 (0:00:08.894) 0:00:30.352 ******* 2025-03-26 16:55:25.146121 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:55:25.146135 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:55:25.146191 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:55:25.146206 | orchestrator | 2025-03-26 16:55:25.146221 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 16:55:25.146236 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:55:25.146253 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:55:25.146269 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:55:25.146285 | orchestrator | 2025-03-26 16:55:25.146300 | orchestrator | 2025-03-26 16:55:25.146316 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-26 16:55:25.146331 | orchestrator | Wednesday 26 March 2025 16:55:21 +0000 (0:00:16.884) 0:00:47.237 ******* 2025-03-26 16:55:25.146346 | orchestrator | =============================================================================== 2025-03-26 16:55:25.146361 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 16.88s 2025-03-26 16:55:25.146376 | orchestrator | redis : Restart redis container ----------------------------------------- 8.89s 2025-03-26 16:55:25.146391 | orchestrator | redis : Copying over redis config files --------------------------------- 5.51s 2025-03-26 16:55:25.146407 | orchestrator | redis : Copying over default config.json files -------------------------- 4.45s 2025-03-26 
16:55:25.146421 | orchestrator | redis : Check redis containers ------------------------------------------ 3.28s 2025-03-26 16:55:25.146460 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.82s 2025-03-26 16:55:25.146476 | orchestrator | redis : include_tasks --------------------------------------------------- 1.28s 2025-03-26 16:55:25.146491 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.16s 2025-03-26 16:55:25.146507 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.90s 2025-03-26 16:55:25.146522 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s 2025-03-26 16:55:25.146542 | orchestrator | 2025-03-26 16:55:25 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:25.146574 | orchestrator | 2025-03-26 16:55:25 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:25.148224 | orchestrator | 2025-03-26 16:55:25 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:25.154901 | orchestrator | 2025-03-26 16:55:25 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:25.159361 | orchestrator | 2025-03-26 16:55:25 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:28.219050 | orchestrator | 2025-03-26 16:55:25 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:28.219158 | orchestrator | 2025-03-26 16:55:28 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:28.219884 | orchestrator | 2025-03-26 16:55:28 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:28.219920 | orchestrator | 2025-03-26 16:55:28 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:28.221019 | orchestrator | 2025-03-26 16:55:28 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:28.222604 | orchestrator | 2025-03-26 16:55:28 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:31.261130 | orchestrator | 2025-03-26 16:55:28 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:31.261263 | orchestrator | 2025-03-26 16:55:31 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:31.264873 | orchestrator | 2025-03-26 16:55:31 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:31.267615 | orchestrator | 2025-03-26 16:55:31 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:31.270182 | orchestrator | 2025-03-26 16:55:31 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:31.272669 | orchestrator | 2025-03-26 16:55:31 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:31.273452 | orchestrator | 2025-03-26 16:55:31 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:34.315597 | orchestrator | 2025-03-26 16:55:34 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:34.316276 | orchestrator | 2025-03-26 16:55:34 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:34.317484 | orchestrator | 2025-03-26 16:55:34 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:34.318729 | 
orchestrator | 2025-03-26 16:55:34 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:34.319908 | orchestrator | 2025-03-26 16:55:34 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:37.384264 | orchestrator | 2025-03-26 16:55:34 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:37.384360 | orchestrator | 2025-03-26 16:55:37 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:37.384702 | orchestrator | 2025-03-26 16:55:37 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:37.385499 | orchestrator | 2025-03-26 16:55:37 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:37.386363 | orchestrator | 2025-03-26 16:55:37 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:37.387006 | orchestrator | 2025-03-26 16:55:37 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:40.464126 | orchestrator | 2025-03-26 16:55:37 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:40.464265 | orchestrator | 2025-03-26 16:55:40 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:40.467076 | orchestrator | 2025-03-26 16:55:40 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:40.467354 | orchestrator | 2025-03-26 16:55:40 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:40.467389 | orchestrator | 2025-03-26 16:55:40 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:40.473710 | orchestrator | 2025-03-26 16:55:40 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:43.529106 | orchestrator | 2025-03-26 16:55:40 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:43.529228 | orchestrator | 2025-03-26 16:55:43 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:43.529677 | orchestrator | 2025-03-26 16:55:43 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:43.531138 | orchestrator | 2025-03-26 16:55:43 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:43.535500 | orchestrator | 2025-03-26 16:55:43 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:43.537871 | orchestrator | 2025-03-26 16:55:43 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:46.595424 | orchestrator | 2025-03-26 16:55:43 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:46.595527 | orchestrator | 2025-03-26 16:55:46 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:46.597548 | orchestrator | 2025-03-26 16:55:46 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:46.600279 | orchestrator | 2025-03-26 16:55:46 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:46.605845 | orchestrator | 2025-03-26 16:55:46 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:46.607345 | orchestrator | 2025-03-26 16:55:46 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:49.646576 | orchestrator | 2025-03-26 16:55:46 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:49.646747 | 
orchestrator | 2025-03-26 16:55:49 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:49.646870 | orchestrator | 2025-03-26 16:55:49 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:49.651189 | orchestrator | 2025-03-26 16:55:49 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:49.653377 | orchestrator | 2025-03-26 16:55:49 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:49.654103 | orchestrator | 2025-03-26 16:55:49 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:52.737261 | orchestrator | 2025-03-26 16:55:49 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:52.737349 | orchestrator | 2025-03-26 16:55:52 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:52.737960 | orchestrator | 2025-03-26 16:55:52 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:52.737995 | orchestrator | 2025-03-26 16:55:52 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:52.739756 | orchestrator | 2025-03-26 16:55:52 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:52.741035 | orchestrator | 2025-03-26 16:55:52 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:55.787243 | orchestrator | 2025-03-26 16:55:52 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:55.787360 | orchestrator | 2025-03-26 16:55:55 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:55.788155 | orchestrator | 2025-03-26 16:55:55 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:55.800224 | orchestrator | 2025-03-26 16:55:55 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:55.802529 | orchestrator | 2025-03-26 16:55:55 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:55.802562 | orchestrator | 2025-03-26 16:55:55 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:58.862839 | orchestrator | 2025-03-26 16:55:55 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:55:58.862964 | orchestrator | 2025-03-26 16:55:58 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:55:58.864515 | orchestrator | 2025-03-26 16:55:58 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:55:58.866000 | orchestrator | 2025-03-26 16:55:58 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:55:58.867777 | orchestrator | 2025-03-26 16:55:58 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:55:58.868887 | orchestrator | 2025-03-26 16:55:58 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:55:58.869076 | orchestrator | 2025-03-26 16:55:58 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:01.912598 | orchestrator | 2025-03-26 16:56:01 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:56:01.914627 | orchestrator | 2025-03-26 16:56:01 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:01.914782 | orchestrator | 2025-03-26 16:56:01 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 
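The container definitions echoed in the redis and openvswitch tasks above each carry a healthcheck block (interval, retries, start_period, test, timeout). As an illustration only, since kolla-ansible starts these containers through its own Ansible modules rather than the plain Docker CLI, the sketch below shows how such a dict could be mapped onto equivalent docker run health flags; the helper name and the assumption that the numeric fields are seconds are not taken from the log.

# Illustrative sketch only: translate a kolla-style healthcheck dict into
# docker run health flags. Units of the numeric fields are assumed to be seconds.
def health_flags(healthcheck):
    cmd = " ".join(healthcheck["test"][1:])  # drop the leading 'CMD-SHELL' marker
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={healthcheck['interval']}s",
        f"--health-retries={healthcheck['retries']}",
        f"--health-start-period={healthcheck['start_period']}s",
        f"--health-timeout={healthcheck['timeout']}s",
    ]

print(health_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
    "timeout": "30",
}))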
2025-03-26 16:56:01.916488 | orchestrator | 2025-03-26 16:56:01 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:01.917892 | orchestrator | 2025-03-26 16:56:01 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:04.965175 | orchestrator | 2025-03-26 16:56:01 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:04.965279 | orchestrator | 2025-03-26 16:56:04 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:56:04.965695 | orchestrator | 2025-03-26 16:56:04 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:04.966552 | orchestrator | 2025-03-26 16:56:04 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:04.967220 | orchestrator | 2025-03-26 16:56:04 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:04.968159 | orchestrator | 2025-03-26 16:56:04 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:08.021623 | orchestrator | 2025-03-26 16:56:04 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:08.021738 | orchestrator | 2025-03-26 16:56:08 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:56:08.022575 | orchestrator | 2025-03-26 16:56:08 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:08.027931 | orchestrator | 2025-03-26 16:56:08 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:08.028205 | orchestrator | 2025-03-26 16:56:08 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:08.028235 | orchestrator | 2025-03-26 16:56:08 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:11.070480 | orchestrator | 2025-03-26 16:56:08 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:11.070609 | orchestrator | 2025-03-26 16:56:11 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:56:11.070926 | orchestrator | 2025-03-26 16:56:11 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:11.071953 | orchestrator | 2025-03-26 16:56:11 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:11.073000 | orchestrator | 2025-03-26 16:56:11 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:11.074150 | orchestrator | 2025-03-26 16:56:11 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:14.123043 | orchestrator | 2025-03-26 16:56:11 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:14.123179 | orchestrator | 2025-03-26 16:56:14 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:56:14.128404 | orchestrator | 2025-03-26 16:56:14 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:14.128461 | orchestrator | 2025-03-26 16:56:14 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:14.129772 | orchestrator | 2025-03-26 16:56:14 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:14.133336 | orchestrator | 2025-03-26 16:56:14 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:17.189615 | orchestrator | 2025-03-26 16:56:14 | INFO  | Wait 1 second(s) until the next check 
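The interleaved "Task ... is in state STARTED" and "Wait 1 second(s) until the next check" records come from the deployment wrapper polling its task IDs until they leave the running state. The loop below is only a minimal sketch of that pattern under the assumption of a generic state lookup; it is not the actual osism client code, and get_task_state is a hypothetical stand-in.

import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll task states until none of them is still pending or started (sketch only)."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # hypothetical state lookup
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)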
2025-03-26 16:56:17.189754 | orchestrator | 2025-03-26 16:56:17 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state STARTED 2025-03-26 16:56:17.191726 | orchestrator | 2025-03-26 16:56:17 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:17.198417 | orchestrator | 2025-03-26 16:56:17 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:17.200957 | orchestrator | 2025-03-26 16:56:17 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:17.203265 | orchestrator | 2025-03-26 16:56:17 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:20.245117 | orchestrator | 2025-03-26 16:56:17 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:20.245265 | orchestrator | 2025-03-26 16:56:20 | INFO  | Task 8d329125-911d-4a50-a2ce-8dd9e2334605 is in state SUCCESS 2025-03-26 16:56:20.246714 | orchestrator | 2025-03-26 16:56:20.246770 | orchestrator | 2025-03-26 16:56:20.246785 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-26 16:56:20.246800 | orchestrator | 2025-03-26 16:56:20.246844 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-26 16:56:20.246860 | orchestrator | Wednesday 26 March 2025 16:54:34 +0000 (0:00:00.469) 0:00:00.469 ******* 2025-03-26 16:56:20.246874 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:56:20.246890 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:56:20.246904 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:56:20.246918 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:56:20.246932 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:56:20.246947 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:56:20.246961 | orchestrator | 2025-03-26 16:56:20.246975 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-26 16:56:20.246989 | orchestrator | Wednesday 26 March 2025 16:54:36 +0000 (0:00:01.957) 0:00:02.427 ******* 2025-03-26 16:56:20.247003 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-26 16:56:20.247017 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-26 16:56:20.247031 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-26 16:56:20.247045 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-26 16:56:20.247059 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-26 16:56:20.247072 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-03-26 16:56:20.247086 | orchestrator | 2025-03-26 16:56:20.247100 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-03-26 16:56:20.247113 | orchestrator | 2025-03-26 16:56:20.247134 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-03-26 16:56:20.247149 | orchestrator | Wednesday 26 March 2025 16:54:38 +0000 (0:00:01.821) 0:00:04.249 ******* 2025-03-26 16:56:20.247164 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 16:56:20.247178 | orchestrator | 2025-03-26 16:56:20.247192 | 
orchestrator | TASK [module-load : Load modules] ********************************************** 2025-03-26 16:56:20.247206 | orchestrator | Wednesday 26 March 2025 16:54:42 +0000 (0:00:04.544) 0:00:08.793 ******* 2025-03-26 16:56:20.247220 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-03-26 16:56:20.247235 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-03-26 16:56:20.247249 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-03-26 16:56:20.247262 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-03-26 16:56:20.247277 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-03-26 16:56:20.247299 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-03-26 16:56:20.247320 | orchestrator | 2025-03-26 16:56:20.247335 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-03-26 16:56:20.247350 | orchestrator | Wednesday 26 March 2025 16:54:45 +0000 (0:00:02.864) 0:00:11.657 ******* 2025-03-26 16:56:20.247365 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-03-26 16:56:20.247385 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-03-26 16:56:20.247400 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-03-26 16:56:20.247416 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-03-26 16:56:20.247431 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-03-26 16:56:20.247446 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-03-26 16:56:20.247461 | orchestrator | 2025-03-26 16:56:20.247477 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-03-26 16:56:20.247492 | orchestrator | Wednesday 26 March 2025 16:54:49 +0000 (0:00:03.648) 0:00:15.305 ******* 2025-03-26 16:56:20.247522 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-03-26 16:56:20.247538 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:56:20.247555 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-03-26 16:56:20.247570 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:56:20.247586 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-03-26 16:56:20.247601 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:56:20.247616 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-03-26 16:56:20.247631 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:56:20.247645 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-03-26 16:56:20.247659 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:56:20.247673 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-03-26 16:56:20.247687 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:56:20.247700 | orchestrator | 2025-03-26 16:56:20.247714 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-03-26 16:56:20.247728 | orchestrator | Wednesday 26 March 2025 16:54:53 +0000 (0:00:03.876) 0:00:19.182 ******* 2025-03-26 16:56:20.247742 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:56:20.247756 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:56:20.247770 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:56:20.247783 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:56:20.247797 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:56:20.247811 | 
orchestrator | skipping: [testbed-node-2] 2025-03-26 16:56:20.247843 | orchestrator | 2025-03-26 16:56:20.247858 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-03-26 16:56:20.247871 | orchestrator | Wednesday 26 March 2025 16:54:53 +0000 (0:00:00.930) 0:00:20.112 ******* 2025-03-26 16:56:20.247899 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.247920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.247935 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.247957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.247973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.247994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248044 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 
'timeout': '30'}}}) 2025-03-26 16:56:20.248064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248114 | orchestrator | 2025-03-26 16:56:20.248129 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-03-26 16:56:20.248143 | orchestrator | Wednesday 26 March 2025 16:54:57 +0000 (0:00:03.605) 0:00:23.717 ******* 2025-03-26 16:56:20.248157 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248196 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248269 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248286 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248395 | orchestrator | 2025-03-26 16:56:20.248409 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-03-26 16:56:20.248423 | orchestrator | Wednesday 26 March 2025 16:55:01 +0000 (0:00:04.095) 0:00:27.813 ******* 2025-03-26 16:56:20.248437 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:56:20.248451 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:56:20.248465 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:56:20.248486 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:56:20.248499 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:56:20.248513 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:56:20.248527 | orchestrator | 2025-03-26 16:56:20.248541 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-03-26 16:56:20.248555 | orchestrator | Wednesday 26 March 2025 16:55:06 +0000 (0:00:05.020) 0:00:32.833 ******* 2025-03-26 16:56:20.248569 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:56:20.248583 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:56:20.248596 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:56:20.248610 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:56:20.248624 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:56:20.248638 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:56:20.248652 | orchestrator | 2025-03-26 16:56:20.248666 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-03-26 16:56:20.248680 | orchestrator | Wednesday 26 March 2025 16:55:11 +0000 (0:00:04.407) 0:00:37.241 ******* 2025-03-26 16:56:20.248694 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:56:20.248708 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:56:20.248721 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:56:20.248735 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:56:20.248749 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:56:20.248763 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:56:20.248776 | orchestrator | 2025-03-26 16:56:20.248790 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-03-26 16:56:20.248804 | orchestrator | Wednesday 26 March 2025 16:55:14 +0000 (0:00:03.628) 0:00:40.870 ******* 2025-03-26 16:56:20.248851 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248868 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248888 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': 
'30'}}}) 2025-03-26 16:56:20.248950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.248979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.249014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.249038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-03-26 16:56:20.249053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.249067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-03-26 16:56:20.249081 | orchestrator | 2025-03-26 16:56:20.249095 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-26 16:56:20.249109 | orchestrator | Wednesday 26 March 2025 16:55:21 +0000 (0:00:06.460) 0:00:47.331 ******* 2025-03-26 16:56:20.249123 | orchestrator | 2025-03-26 16:56:20.249137 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-26 16:56:20.249151 | orchestrator | Wednesday 26 March 2025 16:55:21 +0000 (0:00:00.283) 0:00:47.614 ******* 2025-03-26 16:56:20.249164 | orchestrator | 2025-03-26 16:56:20.249178 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-26 16:56:20.249192 | orchestrator | Wednesday 26 March 2025 16:55:22 +0000 (0:00:00.918) 0:00:48.532 ******* 2025-03-26 16:56:20.249205 | orchestrator | 2025-03-26 16:56:20.249219 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-26 16:56:20.249233 | orchestrator | Wednesday 26 March 2025 16:55:22 +0000 (0:00:00.549) 0:00:49.082 ******* 2025-03-26 16:56:20.249246 | orchestrator | 2025-03-26 16:56:20.249260 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-26 16:56:20.249274 | orchestrator | Wednesday 26 March 2025 16:55:23 +0000 (0:00:00.532) 0:00:49.614 ******* 2025-03-26 16:56:20.249288 | orchestrator | 2025-03-26 16:56:20.249306 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-03-26 16:56:20.249320 | orchestrator | Wednesday 26 March 2025 16:55:23 +0000 (0:00:00.118) 0:00:49.732 ******* 2025-03-26 16:56:20.249334 | orchestrator | 2025-03-26 16:56:20.249348 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server 
container] ******** 2025-03-26 16:56:20.249373 | orchestrator | Wednesday 26 March 2025 16:55:24 +0000 (0:00:00.540) 0:00:50.273 ******* 2025-03-26 16:56:20.249387 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:56:20.249400 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:56:20.249414 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:56:20.249428 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:56:20.249441 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:56:20.249455 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:56:20.249468 | orchestrator | 2025-03-26 16:56:20.249482 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-03-26 16:56:20.249496 | orchestrator | Wednesday 26 March 2025 16:55:35 +0000 (0:00:11.260) 0:01:01.533 ******* 2025-03-26 16:56:20.249515 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:56:20.249530 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:56:20.249544 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:56:20.249558 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:56:20.249572 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:56:20.249585 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:56:20.249599 | orchestrator | 2025-03-26 16:56:20.249613 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-03-26 16:56:20.249627 | orchestrator | Wednesday 26 March 2025 16:55:38 +0000 (0:00:02.974) 0:01:04.508 ******* 2025-03-26 16:56:20.249641 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:56:20.249655 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:56:20.249670 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:56:20.249691 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:56:20.249706 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:56:20.249720 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:56:20.249734 | orchestrator | 2025-03-26 16:56:20.249748 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-03-26 16:56:20.249762 | orchestrator | Wednesday 26 March 2025 16:55:49 +0000 (0:00:10.641) 0:01:15.150 ******* 2025-03-26 16:56:20.249776 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-03-26 16:56:20.249790 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-03-26 16:56:20.249804 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-03-26 16:56:20.249835 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-03-26 16:56:20.249850 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-03-26 16:56:20.249864 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-03-26 16:56:20.249878 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-03-26 16:56:20.249892 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-03-26 16:56:20.249906 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 
'hostname', 'value': 'testbed-node-4'}) 2025-03-26 16:56:20.249919 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-03-26 16:56:20.249933 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-03-26 16:56:20.249947 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-03-26 16:56:20.249961 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-26 16:56:20.249975 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-26 16:56:20.249996 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-26 16:56:20.250010 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-26 16:56:20.250073 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-26 16:56:20.250093 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-03-26 16:56:20.250107 | orchestrator | 2025-03-26 16:56:20.250121 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-03-26 16:56:20.250135 | orchestrator | Wednesday 26 March 2025 16:55:59 +0000 (0:00:10.608) 0:01:25.758 ******* 2025-03-26 16:56:20.250149 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-03-26 16:56:20.250163 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:56:20.250178 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-03-26 16:56:20.250192 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:56:20.250206 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-03-26 16:56:20.250220 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:56:20.250233 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-03-26 16:56:20.250248 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-03-26 16:56:20.250261 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-03-26 16:56:20.250275 | orchestrator | 2025-03-26 16:56:20.250289 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-03-26 16:56:20.250303 | orchestrator | Wednesday 26 March 2025 16:56:03 +0000 (0:00:03.701) 0:01:29.460 ******* 2025-03-26 16:56:20.250317 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-03-26 16:56:20.250331 | orchestrator | skipping: [testbed-node-3] 2025-03-26 16:56:20.250345 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-03-26 16:56:20.250359 | orchestrator | skipping: [testbed-node-4] 2025-03-26 16:56:20.250374 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-03-26 16:56:20.250388 | orchestrator | skipping: [testbed-node-5] 2025-03-26 16:56:20.250402 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-03-26 16:56:20.250423 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-03-26 16:56:20.262073 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 
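The remaining openvswitch tasks set per-host identifiers in the Open_vSwitch table, ensure hw-offload stays absent, and create the br-ex bridge with its vxlan0 port on the first three nodes. kolla-ansible performs these steps through its own modules; the commands below are only a rough ovs-vsctl equivalent, sketched for a single example host and requiring a running Open vSwitch with sufficient privileges.

import subprocess

def ovs(*args):
    # Thin wrapper around ovs-vsctl; raises if the command fails.
    subprocess.run(["ovs-vsctl", *args], check=True)

hostname = "testbed-node-0"  # example host from the play above
ovs("set", "Open_vSwitch", ".", f"external_ids:system-id={hostname}")
ovs("set", "Open_vSwitch", ".", f"external_ids:hostname={hostname}")
ovs("remove", "Open_vSwitch", ".", "other_config", "hw-offload")
ovs("--may-exist", "add-br", "br-ex")
ovs("--may-exist", "add-port", "br-ex", "vxlan0")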
2025-03-26 16:56:20.262175 | orchestrator | 
2025-03-26 16:56:20.262193 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-03-26 16:56:20.262207 | orchestrator | Wednesday 26 March 2025 16:56:07 +0000 (0:00:04.540) 0:01:34.000 *******
2025-03-26 16:56:20.262220 | orchestrator | changed: [testbed-node-3]
2025-03-26 16:56:20.262233 | orchestrator | changed: [testbed-node-4]
2025-03-26 16:56:20.262246 | orchestrator | changed: [testbed-node-5]
2025-03-26 16:56:20.262258 | orchestrator | changed: [testbed-node-0]
2025-03-26 16:56:20.262271 | orchestrator | changed: [testbed-node-1]
2025-03-26 16:56:20.262283 | orchestrator | changed: [testbed-node-2]
2025-03-26 16:56:20.262295 | orchestrator | 
2025-03-26 16:56:20.262308 | orchestrator | PLAY RECAP *********************************************************************
2025-03-26 16:56:20.262322 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-26 16:56:20.262336 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-26 16:56:20.262348 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-03-26 16:56:20.262361 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-26 16:56:20.262401 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-26 16:56:20.262427 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-03-26 16:56:20.262440 | orchestrator | 
2025-03-26 16:56:20.262452 | orchestrator | 
2025-03-26 16:56:20.262465 | orchestrator | TASKS RECAP ********************************************************************
2025-03-26 16:56:20.262477 | orchestrator | Wednesday 26 March 2025 16:56:17 +0000 (0:00:09.398) 0:01:43.398 *******
2025-03-26 16:56:20.262490 | orchestrator | ===============================================================================
2025-03-26 16:56:20.262502 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.04s
2025-03-26 16:56:20.262514 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.26s
2025-03-26 16:56:20.262526 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 10.61s
2025-03-26 16:56:20.262538 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 6.46s
2025-03-26 16:56:20.262551 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 5.02s
2025-03-26 16:56:20.262563 | orchestrator | openvswitch : include_tasks --------------------------------------------- 4.54s
2025-03-26 16:56:20.262575 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.54s
2025-03-26 16:56:20.262587 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 4.41s
2025-03-26 16:56:20.262601 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.10s
2025-03-26 16:56:20.262614 | orchestrator | module-load : Drop module persistence ----------------------------------- 3.88s
2025-03-26 16:56:20.262629 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.70s
2025-03-26 16:56:20.262642 | orchestrator | 
module-load : Persist modules via modules-load.d ------------------------ 3.65s 2025-03-26 16:56:20.262660 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 3.63s 2025-03-26 16:56:20.262673 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.61s 2025-03-26 16:56:20.262687 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.97s 2025-03-26 16:56:20.262700 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.94s 2025-03-26 16:56:20.262713 | orchestrator | module-load : Load modules ---------------------------------------------- 2.86s 2025-03-26 16:56:20.262727 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.96s 2025-03-26 16:56:20.262741 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.82s 2025-03-26 16:56:20.262754 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.93s 2025-03-26 16:56:20.262782 | orchestrator | 2025-03-26 16:56:20 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:20.265634 | orchestrator | 2025-03-26 16:56:20 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:20.269462 | orchestrator | 2025-03-26 16:56:20 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:20.270950 | orchestrator | 2025-03-26 16:56:20 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:20.274665 | orchestrator | 2025-03-26 16:56:20 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:23.342401 | orchestrator | 2025-03-26 16:56:20 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:23.342536 | orchestrator | 2025-03-26 16:56:23 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:23.346551 | orchestrator | 2025-03-26 16:56:23 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:23.347308 | orchestrator | 2025-03-26 16:56:23 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:23.348685 | orchestrator | 2025-03-26 16:56:23 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:23.354347 | orchestrator | 2025-03-26 16:56:23 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:23.355194 | orchestrator | 2025-03-26 16:56:23 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:26.406749 | orchestrator | 2025-03-26 16:56:26 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:26.407714 | orchestrator | 2025-03-26 16:56:26 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:26.408995 | orchestrator | 2025-03-26 16:56:26 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:26.410119 | orchestrator | 2025-03-26 16:56:26 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:26.411250 | orchestrator | 2025-03-26 16:56:26 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:26.411409 | orchestrator | 2025-03-26 16:56:26 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:29.456060 | orchestrator | 2025-03-26 16:56:29 | INFO  | Task 
41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:29.457413 | orchestrator | 2025-03-26 16:56:29 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:29.457441 | orchestrator | 2025-03-26 16:56:29 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:29.457462 | orchestrator | 2025-03-26 16:56:29 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:32.509092 | orchestrator | 2025-03-26 16:56:29 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:32.509216 | orchestrator | 2025-03-26 16:56:29 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:32.509257 | orchestrator | 2025-03-26 16:56:32 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:32.511440 | orchestrator | 2025-03-26 16:56:32 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:32.511565 | orchestrator | 2025-03-26 16:56:32 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:32.511812 | orchestrator | 2025-03-26 16:56:32 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:32.512499 | orchestrator | 2025-03-26 16:56:32 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:35.547241 | orchestrator | 2025-03-26 16:56:32 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:35.547394 | orchestrator | 2025-03-26 16:56:35 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:35.552597 | orchestrator | 2025-03-26 16:56:35 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:35.552632 | orchestrator | 2025-03-26 16:56:35 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:35.552655 | orchestrator | 2025-03-26 16:56:35 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:38.599970 | orchestrator | 2025-03-26 16:56:35 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:38.600057 | orchestrator | 2025-03-26 16:56:35 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:38.600104 | orchestrator | 2025-03-26 16:56:38 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:38.600435 | orchestrator | 2025-03-26 16:56:38 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:38.607464 | orchestrator | 2025-03-26 16:56:38 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:38.608787 | orchestrator | 2025-03-26 16:56:38 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:38.612125 | orchestrator | 2025-03-26 16:56:38 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:41.658652 | orchestrator | 2025-03-26 16:56:38 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:41.658777 | orchestrator | 2025-03-26 16:56:41 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:41.663910 | orchestrator | 2025-03-26 16:56:41 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:41.665765 | orchestrator | 2025-03-26 16:56:41 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:41.665796 | orchestrator | 2025-03-26 
16:56:41 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:41.666966 | orchestrator | 2025-03-26 16:56:41 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:44.735597 | orchestrator | 2025-03-26 16:56:41 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:44.735720 | orchestrator | 2025-03-26 16:56:44 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:44.738232 | orchestrator | 2025-03-26 16:56:44 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:44.739537 | orchestrator | 2025-03-26 16:56:44 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:44.740494 | orchestrator | 2025-03-26 16:56:44 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:44.744912 | orchestrator | 2025-03-26 16:56:44 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:47.805409 | orchestrator | 2025-03-26 16:56:44 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:47.805529 | orchestrator | 2025-03-26 16:56:47 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:47.806896 | orchestrator | 2025-03-26 16:56:47 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:47.808186 | orchestrator | 2025-03-26 16:56:47 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:47.810680 | orchestrator | 2025-03-26 16:56:47 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:47.812050 | orchestrator | 2025-03-26 16:56:47 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:50.864514 | orchestrator | 2025-03-26 16:56:47 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:50.864675 | orchestrator | 2025-03-26 16:56:50 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:50.864898 | orchestrator | 2025-03-26 16:56:50 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:50.864929 | orchestrator | 2025-03-26 16:56:50 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:50.866101 | orchestrator | 2025-03-26 16:56:50 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:50.867184 | orchestrator | 2025-03-26 16:56:50 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:53.914480 | orchestrator | 2025-03-26 16:56:50 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:53.914611 | orchestrator | 2025-03-26 16:56:53 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:53.915286 | orchestrator | 2025-03-26 16:56:53 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:53.916615 | orchestrator | 2025-03-26 16:56:53 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:53.917748 | orchestrator | 2025-03-26 16:56:53 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:53.919103 | orchestrator | 2025-03-26 16:56:53 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:56.966613 | orchestrator | 2025-03-26 16:56:53 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:56:56.966745 | orchestrator | 2025-03-26 
16:56:56 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:56:56.968055 | orchestrator | 2025-03-26 16:56:56 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:56:56.969212 | orchestrator | 2025-03-26 16:56:56 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:56:56.970731 | orchestrator | 2025-03-26 16:56:56 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:56:56.972510 | orchestrator | 2025-03-26 16:56:56 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:56:56.972700 | orchestrator | 2025-03-26 16:56:56 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:00.031840 | orchestrator | 2025-03-26 16:57:00 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:00.035752 | orchestrator | 2025-03-26 16:57:00 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:00.035803 | orchestrator | 2025-03-26 16:57:00 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:00.053542 | orchestrator | 2025-03-26 16:57:00 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:00.055198 | orchestrator | 2025-03-26 16:57:00 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:03.107905 | orchestrator | 2025-03-26 16:57:00 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:03.108045 | orchestrator | 2025-03-26 16:57:03 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:03.108229 | orchestrator | 2025-03-26 16:57:03 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:03.110162 | orchestrator | 2025-03-26 16:57:03 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:03.114761 | orchestrator | 2025-03-26 16:57:03 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:03.115280 | orchestrator | 2025-03-26 16:57:03 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:06.175714 | orchestrator | 2025-03-26 16:57:03 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:06.175891 | orchestrator | 2025-03-26 16:57:06 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:06.176672 | orchestrator | 2025-03-26 16:57:06 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:06.178528 | orchestrator | 2025-03-26 16:57:06 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:06.179293 | orchestrator | 2025-03-26 16:57:06 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:06.180001 | orchestrator | 2025-03-26 16:57:06 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:09.229514 | orchestrator | 2025-03-26 16:57:06 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:09.229633 | orchestrator | 2025-03-26 16:57:09 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:09.231043 | orchestrator | 2025-03-26 16:57:09 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:09.234205 | orchestrator | 2025-03-26 16:57:09 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:09.236741 | 
orchestrator | 2025-03-26 16:57:09 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:12.289526 | orchestrator | 2025-03-26 16:57:09 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:12.289659 | orchestrator | 2025-03-26 16:57:09 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:12.289698 | orchestrator | 2025-03-26 16:57:12 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:12.295695 | orchestrator | 2025-03-26 16:57:12 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:12.302688 | orchestrator | 2025-03-26 16:57:12 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:12.303280 | orchestrator | 2025-03-26 16:57:12 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:12.308096 | orchestrator | 2025-03-26 16:57:12 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:15.387521 | orchestrator | 2025-03-26 16:57:12 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:15.387664 | orchestrator | 2025-03-26 16:57:15 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:15.391163 | orchestrator | 2025-03-26 16:57:15 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:15.399688 | orchestrator | 2025-03-26 16:57:15 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:15.400068 | orchestrator | 2025-03-26 16:57:15 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:15.402737 | orchestrator | 2025-03-26 16:57:15 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:18.453712 | orchestrator | 2025-03-26 16:57:15 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:18.453856 | orchestrator | 2025-03-26 16:57:18 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:18.454321 | orchestrator | 2025-03-26 16:57:18 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:18.457134 | orchestrator | 2025-03-26 16:57:18 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:18.458919 | orchestrator | 2025-03-26 16:57:18 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:18.460371 | orchestrator | 2025-03-26 16:57:18 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:21.535620 | orchestrator | 2025-03-26 16:57:18 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:21.535793 | orchestrator | 2025-03-26 16:57:21 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:21.536825 | orchestrator | 2025-03-26 16:57:21 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:21.536905 | orchestrator | 2025-03-26 16:57:21 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:21.536931 | orchestrator | 2025-03-26 16:57:21 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:21.537464 | orchestrator | 2025-03-26 16:57:21 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:24.588244 | orchestrator | 2025-03-26 16:57:21 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:24.588382 | 
orchestrator | 2025-03-26 16:57:24 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:24.592307 | orchestrator | 2025-03-26 16:57:24 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:24.593925 | orchestrator | 2025-03-26 16:57:24 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:24.595312 | orchestrator | 2025-03-26 16:57:24 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:24.596407 | orchestrator | 2025-03-26 16:57:24 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:27.647158 | orchestrator | 2025-03-26 16:57:24 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:27.647296 | orchestrator | 2025-03-26 16:57:27 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:27.648186 | orchestrator | 2025-03-26 16:57:27 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:27.648223 | orchestrator | 2025-03-26 16:57:27 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:27.649151 | orchestrator | 2025-03-26 16:57:27 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:27.650748 | orchestrator | 2025-03-26 16:57:27 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:30.712262 | orchestrator | 2025-03-26 16:57:27 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:30.712394 | orchestrator | 2025-03-26 16:57:30 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:30.721184 | orchestrator | 2025-03-26 16:57:30 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:30.723958 | orchestrator | 2025-03-26 16:57:30 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:30.726935 | orchestrator | 2025-03-26 16:57:30 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:30.728726 | orchestrator | 2025-03-26 16:57:30 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:33.762142 | orchestrator | 2025-03-26 16:57:30 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:33.762266 | orchestrator | 2025-03-26 16:57:33 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:33.762898 | orchestrator | 2025-03-26 16:57:33 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:33.763554 | orchestrator | 2025-03-26 16:57:33 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:33.764554 | orchestrator | 2025-03-26 16:57:33 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:33.765533 | orchestrator | 2025-03-26 16:57:33 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:33.765643 | orchestrator | 2025-03-26 16:57:33 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:36.819349 | orchestrator | 2025-03-26 16:57:36 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:36.820453 | orchestrator | 2025-03-26 16:57:36 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:36.822189 | orchestrator | 2025-03-26 16:57:36 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 
2025-03-26 16:57:36.823298 | orchestrator | 2025-03-26 16:57:36 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:36.824706 | orchestrator | 2025-03-26 16:57:36 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:39.873567 | orchestrator | 2025-03-26 16:57:36 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:39.873684 | orchestrator | 2025-03-26 16:57:39 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:39.877180 | orchestrator | 2025-03-26 16:57:39 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:39.877218 | orchestrator | 2025-03-26 16:57:39 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:39.878095 | orchestrator | 2025-03-26 16:57:39 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:39.879270 | orchestrator | 2025-03-26 16:57:39 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:42.926713 | orchestrator | 2025-03-26 16:57:39 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:42.926821 | orchestrator | 2025-03-26 16:57:42 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:42.930599 | orchestrator | 2025-03-26 16:57:42 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:45.976090 | orchestrator | 2025-03-26 16:57:42 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:45.976211 | orchestrator | 2025-03-26 16:57:42 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:45.976231 | orchestrator | 2025-03-26 16:57:42 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:45.976265 | orchestrator | 2025-03-26 16:57:42 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:45.976300 | orchestrator | 2025-03-26 16:57:45 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:45.982267 | orchestrator | 2025-03-26 16:57:45 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:45.982323 | orchestrator | 2025-03-26 16:57:45 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state STARTED 2025-03-26 16:57:45.982629 | orchestrator | 2025-03-26 16:57:45 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:45.983478 | orchestrator | 2025-03-26 16:57:45 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:45.983562 | orchestrator | 2025-03-26 16:57:45 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:49.035090 | orchestrator | 2025-03-26 16:57:49 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:49.038707 | orchestrator | 2025-03-26 16:57:49 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:49.041244 | orchestrator | 2025-03-26 16:57:49.041286 | orchestrator | 2025-03-26 16:57:49.041326 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-03-26 16:57:49.041342 | orchestrator | 2025-03-26 16:57:49.041357 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-03-26 16:57:49.041370 | orchestrator | Wednesday 26 March 2025 16:55:06 +0000 (0:00:01.050) 0:00:01.050 ******* 2025-03-26 
16:57:49.041384 | orchestrator | ok: [localhost] => { 2025-03-26 16:57:49.041401 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-03-26 16:57:49.041415 | orchestrator | } 2025-03-26 16:57:49.041429 | orchestrator | 2025-03-26 16:57:49.041443 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-03-26 16:57:49.041457 | orchestrator | Wednesday 26 March 2025 16:55:06 +0000 (0:00:00.114) 0:00:01.164 ******* 2025-03-26 16:57:49.041472 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-03-26 16:57:49.041487 | orchestrator | ...ignoring 2025-03-26 16:57:49.041501 | orchestrator | 2025-03-26 16:57:49.041516 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-03-26 16:57:49.041529 | orchestrator | Wednesday 26 March 2025 16:55:10 +0000 (0:00:04.638) 0:00:05.802 ******* 2025-03-26 16:57:49.041543 | orchestrator | skipping: [localhost] 2025-03-26 16:57:49.041557 | orchestrator | 2025-03-26 16:57:49.041571 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-03-26 16:57:49.041585 | orchestrator | Wednesday 26 March 2025 16:55:11 +0000 (0:00:00.067) 0:00:05.870 ******* 2025-03-26 16:57:49.041599 | orchestrator | ok: [localhost] 2025-03-26 16:57:49.041613 | orchestrator | 2025-03-26 16:57:49.041626 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-26 16:57:49.041640 | orchestrator | 2025-03-26 16:57:49.041654 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-26 16:57:49.041667 | orchestrator | Wednesday 26 March 2025 16:55:11 +0000 (0:00:00.187) 0:00:06.058 ******* 2025-03-26 16:57:49.041681 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:57:49.041695 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:57:49.041708 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:57:49.041722 | orchestrator | 2025-03-26 16:57:49.041736 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-26 16:57:49.041750 | orchestrator | Wednesday 26 March 2025 16:55:12 +0000 (0:00:00.928) 0:00:06.986 ******* 2025-03-26 16:57:49.041764 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-03-26 16:57:49.041778 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-03-26 16:57:49.041792 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-03-26 16:57:49.041806 | orchestrator | 2025-03-26 16:57:49.041820 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-03-26 16:57:49.041834 | orchestrator | 2025-03-26 16:57:49.041847 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-26 16:57:49.041861 | orchestrator | Wednesday 26 March 2025 16:55:14 +0000 (0:00:01.894) 0:00:08.881 ******* 2025-03-26 16:57:49.041875 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 16:57:49.041912 | orchestrator | 2025-03-26 16:57:49.041927 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-03-26 16:57:49.041941 | orchestrator | Wednesday 26 March 
2025 16:55:18 +0000 (0:00:04.023) 0:00:12.905 ******* 2025-03-26 16:57:49.041954 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:57:49.041968 | orchestrator | 2025-03-26 16:57:49.041982 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-03-26 16:57:49.041996 | orchestrator | Wednesday 26 March 2025 16:55:19 +0000 (0:00:01.824) 0:00:14.730 ******* 2025-03-26 16:57:49.042010 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:57:49.042082 | orchestrator | 2025-03-26 16:57:49.042097 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-03-26 16:57:49.042129 | orchestrator | Wednesday 26 March 2025 16:55:21 +0000 (0:00:01.894) 0:00:16.624 ******* 2025-03-26 16:57:49.042143 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:57:49.042157 | orchestrator | 2025-03-26 16:57:49.042171 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-03-26 16:57:49.042195 | orchestrator | Wednesday 26 March 2025 16:55:23 +0000 (0:00:02.044) 0:00:18.669 ******* 2025-03-26 16:57:49.042209 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:57:49.042223 | orchestrator | 2025-03-26 16:57:49.042237 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-03-26 16:57:49.042252 | orchestrator | Wednesday 26 March 2025 16:55:24 +0000 (0:00:01.046) 0:00:19.715 ******* 2025-03-26 16:57:49.042266 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:57:49.042280 | orchestrator | 2025-03-26 16:57:49.042293 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-26 16:57:49.042307 | orchestrator | Wednesday 26 March 2025 16:55:26 +0000 (0:00:01.371) 0:00:21.087 ******* 2025-03-26 16:57:49.042321 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 16:57:49.042336 | orchestrator | 2025-03-26 16:57:49.042350 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-03-26 16:57:49.042364 | orchestrator | Wednesday 26 March 2025 16:55:28 +0000 (0:00:02.411) 0:00:23.499 ******* 2025-03-26 16:57:49.042377 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:57:49.042391 | orchestrator | 2025-03-26 16:57:49.042405 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-03-26 16:57:49.042419 | orchestrator | Wednesday 26 March 2025 16:55:29 +0000 (0:00:00.858) 0:00:24.358 ******* 2025-03-26 16:57:49.042433 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:57:49.042446 | orchestrator | 2025-03-26 16:57:49.042460 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-03-26 16:57:49.042474 | orchestrator | Wednesday 26 March 2025 16:55:29 +0000 (0:00:00.367) 0:00:24.725 ******* 2025-03-26 16:57:49.042488 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:57:49.042502 | orchestrator | 2025-03-26 16:57:49.042523 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-03-26 16:57:49.042537 | orchestrator | Wednesday 26 March 2025 16:55:30 +0000 (0:00:00.473) 0:00:25.198 ******* 2025-03-26 16:57:49.042554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-26 16:57:49.042572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-26 16:57:49.042598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-26 16:57:49.042623 | orchestrator | 2025-03-26 16:57:49.042647 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-03-26 16:57:49.042670 | orchestrator | Wednesday 26 March 2025 16:55:31 +0000 (0:00:01.318) 0:00:26.517 ******* 2025-03-26 16:57:49.042710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 
'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-26 16:57:49.042737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-26 16:57:49.042775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-26 16:57:49.042799 | orchestrator | 2025-03-26 16:57:49.042818 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-03-26 16:57:49.042832 | orchestrator | Wednesday 26 March 2025 16:55:33 +0000 (0:00:01.925) 0:00:28.443 ******* 2025-03-26 16:57:49.042846 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-26 16:57:49.042860 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-26 16:57:49.042874 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-03-26 16:57:49.042909 | orchestrator | 2025-03-26 16:57:49.042924 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-03-26 16:57:49.042938 | orchestrator | Wednesday 26 March 2025 16:55:35 +0000 (0:00:02.013) 0:00:30.457 ******* 2025-03-26 16:57:49.042952 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-26 16:57:49.042966 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-26 16:57:49.042980 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-03-26 16:57:49.042994 | orchestrator | 2025-03-26 16:57:49.043008 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-03-26 16:57:49.043022 | orchestrator | Wednesday 26 March 2025 16:55:40 +0000 (0:00:04.545) 0:00:35.002 ******* 2025-03-26 16:57:49.043036 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-26 16:57:49.043049 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-26 16:57:49.043063 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-03-26 16:57:49.043077 | orchestrator | 2025-03-26 16:57:49.043099 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-03-26 16:57:49.043114 | orchestrator | Wednesday 26 March 2025 16:55:43 +0000 (0:00:03.470) 0:00:38.472 ******* 2025-03-26 16:57:49.043128 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-26 16:57:49.043142 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-26 16:57:49.043156 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-03-26 16:57:49.043170 | orchestrator | 2025-03-26 16:57:49.043184 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-03-26 16:57:49.043198 | orchestrator | Wednesday 26 March 2025 16:55:46 +0000 (0:00:02.558) 0:00:41.031 ******* 2025-03-26 16:57:49.043212 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-26 16:57:49.043226 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-26 16:57:49.043247 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-03-26 16:57:49.043262 | orchestrator | 2025-03-26 16:57:49.043276 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-03-26 16:57:49.043289 | orchestrator | Wednesday 26 March 2025 16:55:48 +0000 (0:00:01.858) 0:00:42.890 ******* 2025-03-26 16:57:49.043303 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-26 16:57:49.043317 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-26 16:57:49.043331 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-03-26 16:57:49.043345 | orchestrator | 2025-03-26 16:57:49.043359 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-03-26 16:57:49.043379 | orchestrator | Wednesday 26 March 2025 16:55:50 +0000 (0:00:02.879) 0:00:45.770 ******* 2025-03-26 16:57:49.043394 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:57:49.043408 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:57:49.043422 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:57:49.043436 | orchestrator | 2025-03-26 16:57:49.043451 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-03-26 16:57:49.043465 | orchestrator | Wednesday 26 March 2025 16:55:53 +0000 (0:00:02.069) 0:00:47.839 ******* 2025-03-26 16:57:49.043479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-26 16:57:49.043495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-26 16:57:49.043519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-26 16:57:49.043541 | orchestrator | 2025-03-26 16:57:49.043556 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-03-26 16:57:49.043569 | orchestrator | Wednesday 26 March 2025 16:55:55 +0000 (0:00:02.213) 0:00:50.052 ******* 2025-03-26 16:57:49.043583 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:57:49.043597 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:57:49.043611 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:57:49.043625 | orchestrator | 2025-03-26 16:57:49.043638 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-03-26 16:57:49.043652 | orchestrator | Wednesday 26 March 2025 16:55:57 +0000 (0:00:01.834) 0:00:51.886 ******* 2025-03-26 16:57:49.043666 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:57:49.043679 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:57:49.043693 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:57:49.043707 | orchestrator | 2025-03-26 16:57:49.043721 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-03-26 16:57:49.043735 | orchestrator | Wednesday 26 March 2025 16:56:02 +0000 (0:00:05.887) 0:00:57.773 ******* 2025-03-26 16:57:49.043748 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:57:49.043762 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:57:49.043776 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:57:49.043790 | orchestrator | 2025-03-26 16:57:49.043803 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-26 16:57:49.043817 | orchestrator | 2025-03-26 16:57:49.043831 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-26 16:57:49.043845 | orchestrator | Wednesday 26 March 2025 16:56:03 +0000 (0:00:00.395) 0:00:58.169 ******* 2025-03-26 16:57:49.043858 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:57:49.043872 | orchestrator | 2025-03-26 16:57:49.043902 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-03-26 16:57:49.043917 | orchestrator | Wednesday 26 March 2025 16:56:04 +0000 (0:00:00.919) 0:00:59.088 ******* 2025-03-26 16:57:49.043931 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:57:49.043944 | orchestrator | 2025-03-26 16:57:49.043958 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-03-26 16:57:49.043972 | orchestrator | Wednesday 26 March 2025 16:56:04 +0000 (0:00:00.325) 0:00:59.414 ******* 2025-03-26 16:57:49.043986 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:57:49.044000 | orchestrator | 2025-03-26 16:57:49.044014 | orchestrator | TASK 
[rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-03-26 16:57:49.044028 | orchestrator | Wednesday 26 March 2025 16:56:06 +0000 (0:00:02.073) 0:01:01.487 ******* 2025-03-26 16:57:49.044041 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:57:49.044055 | orchestrator | 2025-03-26 16:57:49.044069 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-26 16:57:49.044083 | orchestrator | 2025-03-26 16:57:49.044097 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-26 16:57:49.044111 | orchestrator | Wednesday 26 March 2025 16:57:03 +0000 (0:00:56.588) 0:01:58.076 ******* 2025-03-26 16:57:49.044125 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:57:49.044138 | orchestrator | 2025-03-26 16:57:49.044152 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-03-26 16:57:49.044166 | orchestrator | Wednesday 26 March 2025 16:57:03 +0000 (0:00:00.734) 0:01:58.810 ******* 2025-03-26 16:57:49.044187 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:57:49.044201 | orchestrator | 2025-03-26 16:57:49.044215 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-03-26 16:57:49.044228 | orchestrator | Wednesday 26 March 2025 16:57:04 +0000 (0:00:00.375) 0:01:59.186 ******* 2025-03-26 16:57:49.044242 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:57:49.044256 | orchestrator | 2025-03-26 16:57:49.044270 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-03-26 16:57:49.044283 | orchestrator | Wednesday 26 March 2025 16:57:06 +0000 (0:00:02.379) 0:02:01.565 ******* 2025-03-26 16:57:49.044297 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:57:49.044311 | orchestrator | 2025-03-26 16:57:49.044325 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-03-26 16:57:49.044339 | orchestrator | 2025-03-26 16:57:49.044353 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-03-26 16:57:49.044366 | orchestrator | Wednesday 26 March 2025 16:57:21 +0000 (0:00:15.232) 0:02:16.798 ******* 2025-03-26 16:57:49.044380 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:57:49.044394 | orchestrator | 2025-03-26 16:57:49.044408 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-03-26 16:57:49.044422 | orchestrator | Wednesday 26 March 2025 16:57:22 +0000 (0:00:00.683) 0:02:17.482 ******* 2025-03-26 16:57:49.044435 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:57:49.044455 | orchestrator | 2025-03-26 16:57:49.044474 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-03-26 16:57:49.044494 | orchestrator | Wednesday 26 March 2025 16:57:23 +0000 (0:00:00.365) 0:02:17.848 ******* 2025-03-26 16:57:49.044509 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:57:49.044523 | orchestrator | 2025-03-26 16:57:49.044537 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-03-26 16:57:49.044551 | orchestrator | Wednesday 26 March 2025 16:57:30 +0000 (0:00:07.233) 0:02:25.081 ******* 2025-03-26 16:57:49.044565 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:57:49.044579 | orchestrator | 2025-03-26 16:57:49.044593 | orchestrator | PLAY [Apply rabbitmq 
post-configuration] ***************************************
2025-03-26 16:57:49.044607 | orchestrator | 
2025-03-26 16:57:49.044621 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-03-26 16:57:49.044634 | orchestrator | Wednesday 26 March 2025 16:57:41 +0000 (0:00:11.178) 0:02:36.259 *******
2025-03-26 16:57:49.044648 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-03-26 16:57:49.044662 | orchestrator | 
2025-03-26 16:57:49.044676 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-03-26 16:57:49.044689 | orchestrator | Wednesday 26 March 2025 16:57:42 +0000 (0:00:01.063) 0:02:37.323 *******
2025-03-26 16:57:49.044703 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-03-26 16:57:49.044717 | orchestrator | enable_outward_rabbitmq_True
2025-03-26 16:57:49.044731 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-03-26 16:57:49.044745 | orchestrator | outward_rabbitmq_restart
2025-03-26 16:57:49.044759 | orchestrator | ok: [testbed-node-0]
2025-03-26 16:57:49.044772 | orchestrator | ok: [testbed-node-1]
2025-03-26 16:57:49.044786 | orchestrator | ok: [testbed-node-2]
2025-03-26 16:57:49.044800 | orchestrator | 
2025-03-26 16:57:49.044813 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-03-26 16:57:49.044827 | orchestrator | skipping: no hosts matched
2025-03-26 16:57:49.044841 | orchestrator | 
2025-03-26 16:57:49.044854 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-03-26 16:57:49.044868 | orchestrator | skipping: no hosts matched
2025-03-26 16:57:49.044901 | orchestrator | 
2025-03-26 16:57:49.044916 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-03-26 16:57:49.044930 | orchestrator | skipping: no hosts matched
2025-03-26 16:57:49.044944 | orchestrator | 
2025-03-26 16:57:49.044964 | orchestrator | PLAY RECAP *********************************************************************
2025-03-26 16:57:49.044979 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1 
2025-03-26 16:57:49.044993 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-03-26 16:57:49.045007 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-26 16:57:49.045021 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-03-26 16:57:49.045035 | orchestrator | 
2025-03-26 16:57:49.045049 | orchestrator | 
2025-03-26 16:57:49.045063 | orchestrator | TASKS RECAP ********************************************************************
2025-03-26 16:57:49.045077 | orchestrator | Wednesday 26 March 2025 16:57:45 +0000 (0:00:03.003) 0:02:40.326 *******
2025-03-26 16:57:49.045091 | orchestrator | ===============================================================================
2025-03-26 16:57:49.045104 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.00s
2025-03-26 16:57:49.045118 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.69s
2025-03-26 16:57:49.045132 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.89s
2025-03-26
16:57:49.045146 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.64s 2025-03-26 16:57:49.045159 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.55s 2025-03-26 16:57:49.045173 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 4.02s 2025-03-26 16:57:49.045187 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 3.47s 2025-03-26 16:57:49.045200 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.00s 2025-03-26 16:57:49.045214 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.88s 2025-03-26 16:57:49.045228 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.56s 2025-03-26 16:57:49.045242 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.41s 2025-03-26 16:57:49.045255 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.34s 2025-03-26 16:57:49.045269 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.21s 2025-03-26 16:57:49.045282 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.07s 2025-03-26 16:57:49.045296 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 2.04s 2025-03-26 16:57:49.045315 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.01s 2025-03-26 16:57:49.045329 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.93s 2025-03-26 16:57:49.045343 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 1.90s 2025-03-26 16:57:49.045357 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.89s 2025-03-26 16:57:49.045371 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.86s 2025-03-26 16:57:49.045390 | orchestrator | 2025-03-26 16:57:49 | INFO  | Task 2613ea72-cf08-4aff-9610-14a207ce81b2 is in state SUCCESS 2025-03-26 16:57:49.048516 | orchestrator | 2025-03-26 16:57:49 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:49.049106 | orchestrator | 2025-03-26 16:57:49 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:52.105580 | orchestrator | 2025-03-26 16:57:49 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:52.105736 | orchestrator | 2025-03-26 16:57:52 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:52.107712 | orchestrator | 2025-03-26 16:57:52 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:52.111528 | orchestrator | 2025-03-26 16:57:52 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:52.115529 | orchestrator | 2025-03-26 16:57:52 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:55.166919 | orchestrator | 2025-03-26 16:57:52 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:55.167060 | orchestrator | 2025-03-26 16:57:55 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:55.170747 | orchestrator | 2025-03-26 16:57:55 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 
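The rabbitmq play recorded above restarts the broker one node at a time: query the container, optionally drain it into maintenance mode, restart it, then block until the broker answers again before moving to the next node. A minimal Ansible sketch of that final wait step, assuming a container named `rabbitmq` with `rabbitmq-diagnostics` available inside it; this is an illustration, not the exact task kolla-ansible ships:

```yaml
# Sketch of a wait-until-the-broker-answers task; assumes a container named
# "rabbitmq" with rabbitmq-diagnostics available inside it.
- name: Waiting for rabbitmq to start
  community.docker.docker_container_exec:
    container: rabbitmq
    command: rabbitmq-diagnostics -q check_running
  register: rabbitmq_check
  failed_when: false
  changed_when: false
  until: rabbitmq_check.rc == 0
  retries: 30
  delay: 5
```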
2025-03-26 16:57:55.172726 | orchestrator | 2025-03-26 16:57:55 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:55.172972 | orchestrator | 2025-03-26 16:57:55 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:57:58.224584 | orchestrator | 2025-03-26 16:57:55 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:57:58.224706 | orchestrator | 2025-03-26 16:57:58 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:57:58.225690 | orchestrator | 2025-03-26 16:57:58 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:57:58.227031 | orchestrator | 2025-03-26 16:57:58 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:57:58.228149 | orchestrator | 2025-03-26 16:57:58 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:01.298649 | orchestrator | 2025-03-26 16:57:58 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:01.298781 | orchestrator | 2025-03-26 16:58:01 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:01.299146 | orchestrator | 2025-03-26 16:58:01 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:01.299986 | orchestrator | 2025-03-26 16:58:01 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:01.302467 | orchestrator | 2025-03-26 16:58:01 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:01.303755 | orchestrator | 2025-03-26 16:58:01 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:04.359692 | orchestrator | 2025-03-26 16:58:04 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:04.360443 | orchestrator | 2025-03-26 16:58:04 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:04.364531 | orchestrator | 2025-03-26 16:58:04 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:04.364986 | orchestrator | 2025-03-26 16:58:04 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:04.365014 | orchestrator | 2025-03-26 16:58:04 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:07.410960 | orchestrator | 2025-03-26 16:58:07 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:07.411945 | orchestrator | 2025-03-26 16:58:07 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:07.413044 | orchestrator | 2025-03-26 16:58:07 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:07.414615 | orchestrator | 2025-03-26 16:58:07 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:07.414839 | orchestrator | 2025-03-26 16:58:07 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:10.465727 | orchestrator | 2025-03-26 16:58:10 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:10.477822 | orchestrator | 2025-03-26 16:58:10 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:13.516876 | orchestrator | 2025-03-26 16:58:10 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:13.517050 | orchestrator | 2025-03-26 16:58:10 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 
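The long run of "is in state STARTED" / "Wait 1 second(s) until the next check" records around this point is the OSISM manager polling its asynchronous deployment tasks once per second until each reports SUCCESS. The same wait-until-done pattern, expressed generically in Ansible; `show-task-state` below is a hypothetical placeholder, not an OSISM command:

```yaml
# Generic poll-until-SUCCESS sketch mirroring the orchestrator output around
# this point; "show-task-state" is a hypothetical placeholder, not an OSISM CLI.
- name: Wait for task 03f164e6-0788-4e22-a162-459940b99fff to finish
  ansible.builtin.command: show-task-state 03f164e6-0788-4e22-a162-459940b99fff
  register: task_state
  changed_when: false
  until: task_state.stdout is search('SUCCESS')
  retries: 600
  delay: 1
```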
2025-03-26 16:58:13.517071 | orchestrator | 2025-03-26 16:58:10 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:13.517106 | orchestrator | 2025-03-26 16:58:13 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:13.517245 | orchestrator | 2025-03-26 16:58:13 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:13.518375 | orchestrator | 2025-03-26 16:58:13 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:13.519624 | orchestrator | 2025-03-26 16:58:13 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:16.585456 | orchestrator | 2025-03-26 16:58:13 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:16.585584 | orchestrator | 2025-03-26 16:58:16 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:16.585976 | orchestrator | 2025-03-26 16:58:16 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:16.586804 | orchestrator | 2025-03-26 16:58:16 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:16.587630 | orchestrator | 2025-03-26 16:58:16 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:16.587886 | orchestrator | 2025-03-26 16:58:16 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:19.641333 | orchestrator | 2025-03-26 16:58:19 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:19.643019 | orchestrator | 2025-03-26 16:58:19 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:19.647803 | orchestrator | 2025-03-26 16:58:19 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:19.650447 | orchestrator | 2025-03-26 16:58:19 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:19.650621 | orchestrator | 2025-03-26 16:58:19 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:22.704551 | orchestrator | 2025-03-26 16:58:22 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:22.705553 | orchestrator | 2025-03-26 16:58:22 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:22.709693 | orchestrator | 2025-03-26 16:58:22 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:22.710989 | orchestrator | 2025-03-26 16:58:22 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:25.755711 | orchestrator | 2025-03-26 16:58:22 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:25.755854 | orchestrator | 2025-03-26 16:58:25 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:25.756965 | orchestrator | 2025-03-26 16:58:25 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:25.758960 | orchestrator | 2025-03-26 16:58:25 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:25.760318 | orchestrator | 2025-03-26 16:58:25 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:28.805646 | orchestrator | 2025-03-26 16:58:25 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:28.805781 | orchestrator | 2025-03-26 16:58:28 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:28.809866 
| orchestrator | 2025-03-26 16:58:28 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:28.809918 | orchestrator | 2025-03-26 16:58:28 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:31.869463 | orchestrator | 2025-03-26 16:58:28 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:31.869569 | orchestrator | 2025-03-26 16:58:28 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:31.869599 | orchestrator | 2025-03-26 16:58:31 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:31.869983 | orchestrator | 2025-03-26 16:58:31 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:31.871010 | orchestrator | 2025-03-26 16:58:31 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:31.872023 | orchestrator | 2025-03-26 16:58:31 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:31.872206 | orchestrator | 2025-03-26 16:58:31 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:34.915713 | orchestrator | 2025-03-26 16:58:34 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:34.916306 | orchestrator | 2025-03-26 16:58:34 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:34.917469 | orchestrator | 2025-03-26 16:58:34 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:34.918868 | orchestrator | 2025-03-26 16:58:34 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:37.981302 | orchestrator | 2025-03-26 16:58:34 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:37.981527 | orchestrator | 2025-03-26 16:58:37 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:37.981614 | orchestrator | 2025-03-26 16:58:37 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:37.982624 | orchestrator | 2025-03-26 16:58:37 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:37.983567 | orchestrator | 2025-03-26 16:58:37 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:37.984083 | orchestrator | 2025-03-26 16:58:37 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:41.044777 | orchestrator | 2025-03-26 16:58:41 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:41.045178 | orchestrator | 2025-03-26 16:58:41 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:41.046159 | orchestrator | 2025-03-26 16:58:41 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:41.046784 | orchestrator | 2025-03-26 16:58:41 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:44.092021 | orchestrator | 2025-03-26 16:58:41 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:44.092158 | orchestrator | 2025-03-26 16:58:44 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:44.093145 | orchestrator | 2025-03-26 16:58:44 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:44.094678 | orchestrator | 2025-03-26 16:58:44 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:44.096050 | 
orchestrator | 2025-03-26 16:58:44 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:47.148467 | orchestrator | 2025-03-26 16:58:44 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:47.148590 | orchestrator | 2025-03-26 16:58:47 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:47.148832 | orchestrator | 2025-03-26 16:58:47 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:47.152828 | orchestrator | 2025-03-26 16:58:47 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:47.155857 | orchestrator | 2025-03-26 16:58:47 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:50.206298 | orchestrator | 2025-03-26 16:58:47 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:50.206425 | orchestrator | 2025-03-26 16:58:50 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:50.207829 | orchestrator | 2025-03-26 16:58:50 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:50.207859 | orchestrator | 2025-03-26 16:58:50 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:50.207882 | orchestrator | 2025-03-26 16:58:50 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:53.269440 | orchestrator | 2025-03-26 16:58:50 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:53.269585 | orchestrator | 2025-03-26 16:58:53 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:53.270323 | orchestrator | 2025-03-26 16:58:53 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:53.272035 | orchestrator | 2025-03-26 16:58:53 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:53.274167 | orchestrator | 2025-03-26 16:58:53 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:56.327196 | orchestrator | 2025-03-26 16:58:53 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:56.327286 | orchestrator | 2025-03-26 16:58:56 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:56.327716 | orchestrator | 2025-03-26 16:58:56 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:56.328841 | orchestrator | 2025-03-26 16:58:56 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:56.329702 | orchestrator | 2025-03-26 16:58:56 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:58:56.329984 | orchestrator | 2025-03-26 16:58:56 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:58:59.390852 | orchestrator | 2025-03-26 16:58:59 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:58:59.391090 | orchestrator | 2025-03-26 16:58:59 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:58:59.393570 | orchestrator | 2025-03-26 16:58:59 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:58:59.396183 | orchestrator | 2025-03-26 16:58:59 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:59:02.478320 | orchestrator | 2025-03-26 16:58:59 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:02.478447 | orchestrator | 2025-03-26 
16:59:02 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:05.523359 | orchestrator | 2025-03-26 16:59:02 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:05.523473 | orchestrator | 2025-03-26 16:59:02 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:05.523491 | orchestrator | 2025-03-26 16:59:02 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:59:05.523507 | orchestrator | 2025-03-26 16:59:02 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:05.523541 | orchestrator | 2025-03-26 16:59:05 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:05.526006 | orchestrator | 2025-03-26 16:59:05 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:05.528801 | orchestrator | 2025-03-26 16:59:05 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:05.532389 | orchestrator | 2025-03-26 16:59:05 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:59:08.580630 | orchestrator | 2025-03-26 16:59:05 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:08.580761 | orchestrator | 2025-03-26 16:59:08 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:08.584295 | orchestrator | 2025-03-26 16:59:08 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:11.638333 | orchestrator | 2025-03-26 16:59:08 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:11.638428 | orchestrator | 2025-03-26 16:59:08 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:59:11.638440 | orchestrator | 2025-03-26 16:59:08 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:11.638464 | orchestrator | 2025-03-26 16:59:11 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:11.638909 | orchestrator | 2025-03-26 16:59:11 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:11.642088 | orchestrator | 2025-03-26 16:59:11 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:14.681436 | orchestrator | 2025-03-26 16:59:11 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state STARTED 2025-03-26 16:59:14.681495 | orchestrator | 2025-03-26 16:59:11 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:14.681522 | orchestrator | 2025-03-26 16:59:14 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:14.683908 | orchestrator | 2025-03-26 16:59:14 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:14.686097 | orchestrator | 2025-03-26 16:59:14 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:14.686138 | orchestrator | 2025-03-26 16:59:14 | INFO  | Task 03f164e6-0788-4e22-a162-459940b99fff is in state SUCCESS 2025-03-26 16:59:14.686161 | orchestrator | 2025-03-26 16:59:14.686176 | orchestrator | 2025-03-26 16:59:14.686190 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-03-26 16:59:14.686205 | orchestrator | 2025-03-26 16:59:14.686291 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-03-26 16:59:14.686306 | orchestrator | Wednesday 26 
March 2025 16:56:22 +0000 (0:00:00.335) 0:00:00.335 ******* 2025-03-26 16:59:14.686320 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:59:14.686336 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:59:14.686473 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:59:14.686490 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.686504 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.686518 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.686532 | orchestrator | 2025-03-26 16:59:14.686546 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-03-26 16:59:14.686560 | orchestrator | Wednesday 26 March 2025 16:56:23 +0000 (0:00:00.875) 0:00:01.211 ******* 2025-03-26 16:59:14.686574 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-03-26 16:59:14.686588 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-03-26 16:59:14.686602 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-03-26 16:59:14.686616 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-03-26 16:59:14.686632 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-03-26 16:59:14.686649 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-03-26 16:59:14.686664 | orchestrator | 2025-03-26 16:59:14.686679 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-03-26 16:59:14.686694 | orchestrator | 2025-03-26 16:59:14.686709 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-03-26 16:59:14.686725 | orchestrator | Wednesday 26 March 2025 16:56:24 +0000 (0:00:01.639) 0:00:02.850 ******* 2025-03-26 16:59:14.686741 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 16:59:14.686758 | orchestrator | 2025-03-26 16:59:14.686773 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-03-26 16:59:14.686788 | orchestrator | Wednesday 26 March 2025 16:56:28 +0000 (0:00:03.294) 0:00:06.145 ******* 2025-03-26 16:59:14.686805 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.686822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.686838 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.686853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.686869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.686920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.686937 | orchestrator | 2025-03-26 16:59:14.686953 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-03-26 16:59:14.686968 | orchestrator | Wednesday 26 March 2025 16:56:30 +0000 (0:00:02.270) 0:00:08.416 ******* 2025-03-26 16:59:14.687026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687112 | orchestrator | 2025-03-26 16:59:14.687126 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-03-26 16:59:14.687140 | orchestrator | Wednesday 26 March 2025 16:56:32 +0000 (0:00:02.345) 0:00:10.761 ******* 2025-03-26 16:59:14.687154 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687176 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687206 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687264 | orchestrator | 2025-03-26 16:59:14.687278 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-03-26 16:59:14.687292 | orchestrator | Wednesday 26 March 2025 16:56:34 +0000 (0:00:01.378) 0:00:12.139 ******* 2025-03-26 16:59:14.687306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687320 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687410 | orchestrator | 2025-03-26 16:59:14.687424 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-03-26 16:59:14.687438 | orchestrator | Wednesday 26 March 2025 16:56:36 +0000 (0:00:02.570) 0:00:14.709 ******* 2025-03-26 16:59:14.687452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.687544 | orchestrator | 
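The next two tasks create the integration bridge and write the per-chassis OVN settings (ovn-encap-ip, ovn-encap-type, ovn-remote, and on the control nodes ovn-cms-options) into the local Open vSwitch database as external_ids. A rough equivalent for testbed-node-0, using the openvswitch.openvswitch collection for illustration rather than kolla-ansible's own tooling, with values taken from the item output below:

```yaml
# Illustrative equivalent of the "Create br-int bridge" and "Configure OVN in
# OVSDB" tasks for testbed-node-0; kolla-ansible drives this through its own
# modules.
- name: Create br-int bridge on OpenvSwitch
  openvswitch.openvswitch.openvswitch_bridge:
    bridge: br-int
    state: present

- name: Configure OVN in OVSDB
  openvswitch.openvswitch.openvswitch_db:
    table: Open_vSwitch
    record: .
    col: external_ids
    key: "{{ item.name }}"
    value: "{{ item.value }}"
  loop:
    - { name: ovn-encap-ip, value: "192.168.16.10" }
    - { name: ovn-encap-type, value: "geneve" }
    - { name: ovn-remote, value: "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" }
    - { name: ovn-cms-options, value: "enable-chassis-as-gw,availability-zones=nova" }
```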
2025-03-26 16:59:14.687558 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-03-26 16:59:14.687572 | orchestrator | Wednesday 26 March 2025 16:56:38 +0000 (0:00:01.723) 0:00:16.433 ******* 2025-03-26 16:59:14.687586 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:59:14.687601 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:59:14.687615 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:59:14.687629 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:59:14.687643 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:59:14.687657 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:59:14.687670 | orchestrator | 2025-03-26 16:59:14.687685 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-03-26 16:59:14.687698 | orchestrator | Wednesday 26 March 2025 16:56:41 +0000 (0:00:03.142) 0:00:19.575 ******* 2025-03-26 16:59:14.687712 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-03-26 16:59:14.687726 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-03-26 16:59:14.687741 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-03-26 16:59:14.687760 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-03-26 16:59:14.687774 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-26 16:59:14.687788 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-03-26 16:59:14.687802 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-03-26 16:59:14.687816 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-26 16:59:14.687829 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-26 16:59:14.687844 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-26 16:59:14.687859 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-26 16:59:14.687873 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-26 16:59:14.687887 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-26 16:59:14.687907 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-03-26 16:59:14.687921 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-26 16:59:14.687935 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-26 16:59:14.687950 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-26 16:59:14.687965 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 
2025-03-26 16:59:14.688008 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-26 16:59:14.688023 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-03-26 16:59:14.688037 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-26 16:59:14.688051 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-26 16:59:14.688065 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-26 16:59:14.688084 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-26 16:59:14.688099 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-03-26 16:59:14.688112 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-26 16:59:14.688126 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-26 16:59:14.688140 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-26 16:59:14.688154 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-26 16:59:14.688167 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-26 16:59:14.688181 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-26 16:59:14.688195 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-03-26 16:59:14.688209 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-26 16:59:14.688223 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-26 16:59:14.688237 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-26 16:59:14.688251 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-26 16:59:14.688265 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-26 16:59:14.688278 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-03-26 16:59:14.688293 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-03-26 16:59:14.688306 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-03-26 16:59:14.688326 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-26 16:59:14.688341 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-26 16:59:14.688355 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 
2025-03-26 16:59:14.688369 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-03-26 16:59:14.688383 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-03-26 16:59:14.688397 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-26 16:59:14.688411 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-03-26 16:59:14.688432 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-03-26 16:59:14.688446 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-26 16:59:14.688460 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-03-26 16:59:14.688473 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-03-26 16:59:14.688487 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-26 16:59:14.688502 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-26 16:59:14.688516 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-03-26 16:59:14.688530 | orchestrator | 2025-03-26 16:59:14.688544 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-26 16:59:14.688558 | orchestrator | Wednesday 26 March 2025 16:57:05 +0000 (0:00:23.801) 0:00:43.377 ******* 2025-03-26 16:59:14.688572 | orchestrator | 2025-03-26 16:59:14.688586 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-26 16:59:14.688600 | orchestrator | Wednesday 26 March 2025 16:57:05 +0000 (0:00:00.157) 0:00:43.534 ******* 2025-03-26 16:59:14.688614 | orchestrator | 2025-03-26 16:59:14.688627 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-26 16:59:14.688641 | orchestrator | Wednesday 26 March 2025 16:57:06 +0000 (0:00:00.454) 0:00:43.989 ******* 2025-03-26 16:59:14.688655 | orchestrator | 2025-03-26 16:59:14.688669 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-26 16:59:14.688682 | orchestrator | Wednesday 26 March 2025 16:57:06 +0000 (0:00:00.083) 0:00:44.073 ******* 2025-03-26 16:59:14.688696 | orchestrator | 2025-03-26 16:59:14.688710 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-26 16:59:14.688724 | orchestrator | Wednesday 26 March 2025 16:57:06 +0000 (0:00:00.086) 0:00:44.159 ******* 2025-03-26 16:59:14.688737 | orchestrator | 2025-03-26 16:59:14.688751 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-03-26 16:59:14.688765 | orchestrator | Wednesday 26 March 2025 16:57:06 +0000 (0:00:00.106) 0:00:44.265 ******* 2025-03-26 16:59:14.688779 | 
orchestrator | 2025-03-26 16:59:14.688793 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-03-26 16:59:14.688806 | orchestrator | Wednesday 26 March 2025 16:57:06 +0000 (0:00:00.323) 0:00:44.589 ******* 2025-03-26 16:59:14.688820 | orchestrator | ok: [testbed-node-3] 2025-03-26 16:59:14.688834 | orchestrator | ok: [testbed-node-5] 2025-03-26 16:59:14.688848 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.688862 | orchestrator | ok: [testbed-node-4] 2025-03-26 16:59:14.688876 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.688889 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.688903 | orchestrator | 2025-03-26 16:59:14.688917 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-03-26 16:59:14.688931 | orchestrator | Wednesday 26 March 2025 16:57:09 +0000 (0:00:02.295) 0:00:46.884 ******* 2025-03-26 16:59:14.688944 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:59:14.688958 | orchestrator | changed: [testbed-node-3] 2025-03-26 16:59:14.688972 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:59:14.689033 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:59:14.689047 | orchestrator | changed: [testbed-node-4] 2025-03-26 16:59:14.689061 | orchestrator | changed: [testbed-node-5] 2025-03-26 16:59:14.689111 | orchestrator | 2025-03-26 16:59:14.689127 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-03-26 16:59:14.689141 | orchestrator | 2025-03-26 16:59:14.689155 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-03-26 16:59:14.689182 | orchestrator | Wednesday 26 March 2025 16:57:27 +0000 (0:00:18.262) 0:01:05.146 ******* 2025-03-26 16:59:14.689196 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 16:59:14.689210 | orchestrator | 2025-03-26 16:59:14.689224 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-03-26 16:59:14.689238 | orchestrator | Wednesday 26 March 2025 16:57:28 +0000 (0:00:00.734) 0:01:05.881 ******* 2025-03-26 16:59:14.689252 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 16:59:14.689266 | orchestrator | 2025-03-26 16:59:14.689287 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-03-26 16:59:14.689302 | orchestrator | Wednesday 26 March 2025 16:57:29 +0000 (0:00:01.168) 0:01:07.049 ******* 2025-03-26 16:59:14.689316 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.689329 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.689343 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.689357 | orchestrator | 2025-03-26 16:59:14.689371 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-03-26 16:59:14.689390 | orchestrator | Wednesday 26 March 2025 16:57:31 +0000 (0:00:02.068) 0:01:09.118 ******* 2025-03-26 16:59:14.689404 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.689418 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.689432 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.689446 | orchestrator | 2025-03-26 16:59:14.689460 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-03-26 16:59:14.689474 | 
orchestrator | Wednesday 26 March 2025 16:57:32 +0000 (0:00:00.862) 0:01:09.980 ******* 2025-03-26 16:59:14.689487 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.689501 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.689515 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.689529 | orchestrator | 2025-03-26 16:59:14.689543 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-03-26 16:59:14.689557 | orchestrator | Wednesday 26 March 2025 16:57:32 +0000 (0:00:00.720) 0:01:10.701 ******* 2025-03-26 16:59:14.689571 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.689584 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.689598 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.689612 | orchestrator | 2025-03-26 16:59:14.689626 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-03-26 16:59:14.689640 | orchestrator | Wednesday 26 March 2025 16:57:33 +0000 (0:00:00.757) 0:01:11.459 ******* 2025-03-26 16:59:14.689653 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.689667 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.689681 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.689695 | orchestrator | 2025-03-26 16:59:14.689709 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-03-26 16:59:14.689723 | orchestrator | Wednesday 26 March 2025 16:57:34 +0000 (0:00:00.402) 0:01:11.861 ******* 2025-03-26 16:59:14.689736 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.689750 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.689769 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.689783 | orchestrator | 2025-03-26 16:59:14.689797 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-03-26 16:59:14.689811 | orchestrator | Wednesday 26 March 2025 16:57:34 +0000 (0:00:00.732) 0:01:12.593 ******* 2025-03-26 16:59:14.689825 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.689838 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.689852 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.689866 | orchestrator | 2025-03-26 16:59:14.689880 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-03-26 16:59:14.689893 | orchestrator | Wednesday 26 March 2025 16:57:35 +0000 (0:00:00.828) 0:01:13.422 ******* 2025-03-26 16:59:14.689907 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.689928 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.689942 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.689956 | orchestrator | 2025-03-26 16:59:14.689969 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-03-26 16:59:14.690000 | orchestrator | Wednesday 26 March 2025 16:57:36 +0000 (0:00:00.646) 0:01:14.069 ******* 2025-03-26 16:59:14.690054 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.690071 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.690086 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.690100 | orchestrator | 2025-03-26 16:59:14.690114 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-03-26 16:59:14.690133 | orchestrator | Wednesday 26 March 2025 16:57:36 +0000 (0:00:00.377) 0:01:14.447 ******* 2025-03-26 16:59:14.690147 
| orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.690161 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.690175 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.690188 | orchestrator | 2025-03-26 16:59:14.690203 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-03-26 16:59:14.690216 | orchestrator | Wednesday 26 March 2025 16:57:37 +0000 (0:00:00.822) 0:01:15.269 ******* 2025-03-26 16:59:14.690230 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.690244 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.690258 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.690272 | orchestrator | 2025-03-26 16:59:14.690286 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-03-26 16:59:14.690300 | orchestrator | Wednesday 26 March 2025 16:57:38 +0000 (0:00:00.610) 0:01:15.880 ******* 2025-03-26 16:59:14.690313 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.690327 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.690341 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.690355 | orchestrator | 2025-03-26 16:59:14.690368 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-03-26 16:59:14.690382 | orchestrator | Wednesday 26 March 2025 16:57:38 +0000 (0:00:00.669) 0:01:16.549 ******* 2025-03-26 16:59:14.690396 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.690410 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.690424 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.690439 | orchestrator | 2025-03-26 16:59:14.690453 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-03-26 16:59:14.690467 | orchestrator | Wednesday 26 March 2025 16:57:39 +0000 (0:00:00.622) 0:01:17.172 ******* 2025-03-26 16:59:14.690480 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.690494 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.690508 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.690522 | orchestrator | 2025-03-26 16:59:14.690536 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-03-26 16:59:14.690549 | orchestrator | Wednesday 26 March 2025 16:57:39 +0000 (0:00:00.508) 0:01:17.680 ******* 2025-03-26 16:59:14.690563 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.690577 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.690591 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.690605 | orchestrator | 2025-03-26 16:59:14.690624 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-03-26 16:59:14.690638 | orchestrator | Wednesday 26 March 2025 16:57:40 +0000 (0:00:00.967) 0:01:18.648 ******* 2025-03-26 16:59:14.690652 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.690666 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.690680 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.690694 | orchestrator | 2025-03-26 16:59:14.690708 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-03-26 16:59:14.690722 | orchestrator | Wednesday 26 March 2025 16:57:41 +0000 (0:00:00.602) 0:01:19.250 ******* 2025-03-26 16:59:14.690735 | orchestrator | skipping: [testbed-node-0] 2025-03-26 
16:59:14.690750 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.690770 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.690784 | orchestrator | 2025-03-26 16:59:14.690798 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-03-26 16:59:14.690817 | orchestrator | Wednesday 26 March 2025 16:57:41 +0000 (0:00:00.362) 0:01:19.613 ******* 2025-03-26 16:59:14.690831 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 16:59:14.690845 | orchestrator | 2025-03-26 16:59:14.690859 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-03-26 16:59:14.690872 | orchestrator | Wednesday 26 March 2025 16:57:43 +0000 (0:00:01.338) 0:01:20.951 ******* 2025-03-26 16:59:14.690886 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.690900 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.690914 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.690928 | orchestrator | 2025-03-26 16:59:14.690941 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-03-26 16:59:14.690955 | orchestrator | Wednesday 26 March 2025 16:57:43 +0000 (0:00:00.683) 0:01:21.635 ******* 2025-03-26 16:59:14.691047 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.691062 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.691076 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.691089 | orchestrator | 2025-03-26 16:59:14.691104 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-03-26 16:59:14.691118 | orchestrator | Wednesday 26 March 2025 16:57:44 +0000 (0:00:00.937) 0:01:22.572 ******* 2025-03-26 16:59:14.691131 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.691145 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.691159 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.691173 | orchestrator | 2025-03-26 16:59:14.691187 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-03-26 16:59:14.691201 | orchestrator | Wednesday 26 March 2025 16:57:45 +0000 (0:00:01.094) 0:01:23.667 ******* 2025-03-26 16:59:14.691215 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.691229 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.691243 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.691256 | orchestrator | 2025-03-26 16:59:14.691270 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-03-26 16:59:14.691284 | orchestrator | Wednesday 26 March 2025 16:57:46 +0000 (0:00:00.639) 0:01:24.306 ******* 2025-03-26 16:59:14.691298 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.691311 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.691325 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.691339 | orchestrator | 2025-03-26 16:59:14.691353 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-03-26 16:59:14.691367 | orchestrator | Wednesday 26 March 2025 16:57:46 +0000 (0:00:00.441) 0:01:24.748 ******* 2025-03-26 16:59:14.691381 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.691395 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.691415 | orchestrator | skipping: [testbed-node-2] 
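
(The ovn-db checks above decide, per node, whether to bootstrap a brand-new OVN NB/SB Raft cluster or to join one that already exists; on this fresh testbed the existence checks return ok, the join-path tasks are skipped, and the role falls through to bootstrap-initial.yml as shown below. The same Raft state those tasks probe can be inspected by hand with ovsdb-server's cluster/status command. The following is a minimal ad-hoc sketch in the same Ansible style, not part of the role itself: the ovn-nb-db group and ovn_nb_db container name are taken from this log, while the use of docker exec and the control-socket path inside the kolla image are assumptions that may need adjusting.)

    # Ad-hoc check of the OVN_Northbound Raft cluster state, roughly what the
    # "Get OVN NB database information" task above looks at.
    # Assumptions: docker is the container engine and the NB DB control socket
    # lives at /var/run/ovn/ovnnb_db.ctl inside the ovn_nb_db container.
    - name: Show OVN NB cluster status on the DB hosts
      hosts: ovn-nb-db
      gather_facts: false
      tasks:
        - name: Query ovsdb-server Raft status inside the ovn_nb_db container
          ansible.builtin.command: >
            docker exec ovn_nb_db
            ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
          register: nb_cluster_status
          changed_when: false

        - name: Print the cluster role (leader/follower) reported by each node
          ansible.builtin.debug:
            msg: "{{ nb_cluster_status.stdout_lines }}"

(On a healthy three-node cluster each node should report Role: leader or follower and list all three servers; the "Get OVN_Northbound cluster leader" and "Configure OVN NB connection settings" tasks later in this play rely on exactly that leader/follower distinction.)
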
2025-03-26 16:59:14.691429 | orchestrator | 2025-03-26 16:59:14.691443 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-03-26 16:59:14.691457 | orchestrator | Wednesday 26 March 2025 16:57:47 +0000 (0:00:00.754) 0:01:25.502 ******* 2025-03-26 16:59:14.691470 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.691484 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.691496 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.691508 | orchestrator | 2025-03-26 16:59:14.691521 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-03-26 16:59:14.691533 | orchestrator | Wednesday 26 March 2025 16:57:48 +0000 (0:00:00.684) 0:01:26.186 ******* 2025-03-26 16:59:14.691545 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.691558 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.691570 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.691589 | orchestrator | 2025-03-26 16:59:14.691602 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-03-26 16:59:14.691614 | orchestrator | Wednesday 26 March 2025 16:57:49 +0000 (0:00:01.082) 0:01:27.269 ******* 2025-03-26 16:59:14.691627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.691642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.691663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.691885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.691904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.691917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.691930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.691943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.691955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.691999 | orchestrator | 2025-03-26 16:59:14.692013 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-03-26 16:59:14.692026 | orchestrator | Wednesday 26 March 2025 16:57:51 +0000 (0:00:02.221) 0:01:29.490 ******* 2025-03-26 16:59:14.692038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692170 | orchestrator | 2025-03-26 16:59:14.692183 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-03-26 16:59:14.692195 | orchestrator | Wednesday 26 March 2025 16:57:58 +0000 (0:00:06.991) 0:01:36.481 ******* 2025-03-26 16:59:14.692208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.692342 | orchestrator | 2025-03-26 16:59:14.692355 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-26 16:59:14.692368 | orchestrator | Wednesday 26 March 2025 16:58:01 +0000 (0:00:03.198) 0:01:39.680 ******* 2025-03-26 16:59:14.692380 | orchestrator | 2025-03-26 
16:59:14.692392 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-26 16:59:14.692405 | orchestrator | Wednesday 26 March 2025 16:58:01 +0000 (0:00:00.154) 0:01:39.835 ******* 2025-03-26 16:59:14.692417 | orchestrator | 2025-03-26 16:59:14.692430 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-26 16:59:14.692443 | orchestrator | Wednesday 26 March 2025 16:58:02 +0000 (0:00:00.145) 0:01:39.980 ******* 2025-03-26 16:59:14.692457 | orchestrator | 2025-03-26 16:59:14.692470 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-03-26 16:59:14.692484 | orchestrator | Wednesday 26 March 2025 16:58:02 +0000 (0:00:00.567) 0:01:40.547 ******* 2025-03-26 16:59:14.692497 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:59:14.692510 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:59:14.692524 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:59:14.692537 | orchestrator | 2025-03-26 16:59:14.692551 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-03-26 16:59:14.692569 | orchestrator | Wednesday 26 March 2025 16:58:10 +0000 (0:00:07.871) 0:01:48.419 ******* 2025-03-26 16:59:14.692583 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:59:14.692596 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:59:14.692610 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:59:14.692623 | orchestrator | 2025-03-26 16:59:14.692637 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-03-26 16:59:14.692650 | orchestrator | Wednesday 26 March 2025 16:58:18 +0000 (0:00:08.216) 0:01:56.636 ******* 2025-03-26 16:59:14.692663 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:59:14.692677 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:59:14.692690 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:59:14.692703 | orchestrator | 2025-03-26 16:59:14.692716 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-03-26 16:59:14.692730 | orchestrator | Wednesday 26 March 2025 16:58:27 +0000 (0:00:08.673) 0:02:05.310 ******* 2025-03-26 16:59:14.692744 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.692757 | orchestrator | 2025-03-26 16:59:14.692771 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-03-26 16:59:14.692785 | orchestrator | Wednesday 26 March 2025 16:58:27 +0000 (0:00:00.134) 0:02:05.445 ******* 2025-03-26 16:59:14.692797 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.692810 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.692822 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.692834 | orchestrator | 2025-03-26 16:59:14.692852 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-03-26 16:59:14.692865 | orchestrator | Wednesday 26 March 2025 16:58:28 +0000 (0:00:01.234) 0:02:06.679 ******* 2025-03-26 16:59:14.692878 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.692890 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.692902 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:59:14.692914 | orchestrator | 2025-03-26 16:59:14.692927 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-03-26 16:59:14.692939 | orchestrator | 
Wednesday 26 March 2025 16:58:29 +0000 (0:00:00.710) 0:02:07.389 ******* 2025-03-26 16:59:14.692951 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.692963 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.692990 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.693004 | orchestrator | 2025-03-26 16:59:14.693016 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-03-26 16:59:14.693039 | orchestrator | Wednesday 26 March 2025 16:58:30 +0000 (0:00:01.234) 0:02:08.623 ******* 2025-03-26 16:59:14.693052 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.693064 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.693076 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:59:14.693088 | orchestrator | 2025-03-26 16:59:14.693101 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-03-26 16:59:14.693113 | orchestrator | Wednesday 26 March 2025 16:58:31 +0000 (0:00:00.801) 0:02:09.424 ******* 2025-03-26 16:59:14.693125 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.693138 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.693150 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.693162 | orchestrator | 2025-03-26 16:59:14.693174 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-03-26 16:59:14.693186 | orchestrator | Wednesday 26 March 2025 16:58:32 +0000 (0:00:01.342) 0:02:10.767 ******* 2025-03-26 16:59:14.693199 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.693211 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.693223 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.693235 | orchestrator | 2025-03-26 16:59:14.693248 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-03-26 16:59:14.693260 | orchestrator | Wednesday 26 March 2025 16:58:33 +0000 (0:00:00.935) 0:02:11.703 ******* 2025-03-26 16:59:14.693272 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.693284 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.693296 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.693308 | orchestrator | 2025-03-26 16:59:14.693321 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-03-26 16:59:14.693333 | orchestrator | Wednesday 26 March 2025 16:58:34 +0000 (0:00:00.668) 0:02:12.371 ******* 2025-03-26 16:59:14.693346 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693358 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693371 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693384 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693397 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693409 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693434 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693447 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693460 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693472 | orchestrator | 2025-03-26 16:59:14.693484 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-03-26 16:59:14.693497 | orchestrator | Wednesday 26 March 2025 16:58:36 +0000 (0:00:02.122) 0:02:14.493 ******* 2025-03-26 16:59:14.693509 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693522 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693535 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693552 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693603 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693641 | orchestrator | 2025-03-26 16:59:14.693653 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-03-26 16:59:14.693666 | orchestrator | Wednesday 26 March 2025 16:58:41 +0000 (0:00:04.658) 0:02:19.152 ******* 2025-03-26 16:59:14.693678 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693690 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693703 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693716 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693733 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693751 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693768 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693787 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 
'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693800 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-03-26 16:59:14.693812 | orchestrator | 2025-03-26 16:59:14.693825 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-26 16:59:14.693837 | orchestrator | Wednesday 26 March 2025 16:58:44 +0000 (0:00:03.174) 0:02:22.327 ******* 2025-03-26 16:59:14.693849 | orchestrator | 2025-03-26 16:59:14.693862 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-26 16:59:14.693874 | orchestrator | Wednesday 26 March 2025 16:58:44 +0000 (0:00:00.235) 0:02:22.562 ******* 2025-03-26 16:59:14.693886 | orchestrator | 2025-03-26 16:59:14.693898 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-03-26 16:59:14.693911 | orchestrator | Wednesday 26 March 2025 16:58:44 +0000 (0:00:00.064) 0:02:22.627 ******* 2025-03-26 16:59:14.693923 | orchestrator | 2025-03-26 16:59:14.693935 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-03-26 16:59:14.693948 | orchestrator | Wednesday 26 March 2025 16:58:44 +0000 (0:00:00.058) 0:02:22.685 ******* 2025-03-26 16:59:14.693960 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:59:14.693972 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:59:14.693998 | orchestrator | 2025-03-26 16:59:14.694011 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-03-26 16:59:14.694051 | orchestrator | Wednesday 26 March 2025 16:58:52 +0000 (0:00:07.268) 0:02:29.953 ******* 2025-03-26 16:59:14.694064 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:59:14.694076 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:59:14.694088 | orchestrator | 2025-03-26 16:59:14.694101 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-03-26 16:59:14.694113 | orchestrator | Wednesday 26 March 2025 16:58:58 +0000 (0:00:06.572) 0:02:36.526 ******* 2025-03-26 16:59:14.694125 | orchestrator | changed: [testbed-node-1] 2025-03-26 16:59:14.694137 | orchestrator | changed: [testbed-node-2] 2025-03-26 16:59:14.694149 | orchestrator | 2025-03-26 16:59:14.694162 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-03-26 16:59:14.694174 | orchestrator | Wednesday 26 March 2025 16:59:05 +0000 (0:00:06.710) 0:02:43.236 ******* 2025-03-26 16:59:14.694186 | orchestrator | skipping: [testbed-node-0] 2025-03-26 16:59:14.694198 | orchestrator | 2025-03-26 16:59:14.694215 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-03-26 16:59:14.694228 | orchestrator | Wednesday 26 March 2025 16:59:05 +0000 (0:00:00.473) 0:02:43.710 ******* 
2025-03-26 16:59:14.694240 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.694252 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.694264 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.694277 | orchestrator | 2025-03-26 16:59:14.694289 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-03-26 16:59:14.694301 | orchestrator | Wednesday 26 March 2025 16:59:06 +0000 (0:00:00.940) 0:02:44.650 ******* 2025-03-26 16:59:14.694313 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.694326 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:59:14.694338 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.694351 | orchestrator | 2025-03-26 16:59:14.694363 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-03-26 16:59:14.694375 | orchestrator | Wednesday 26 March 2025 16:59:07 +0000 (0:00:00.728) 0:02:45.379 ******* 2025-03-26 16:59:14.694388 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.694407 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.694421 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.694433 | orchestrator | 2025-03-26 16:59:14.694446 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-03-26 16:59:14.694458 | orchestrator | Wednesday 26 March 2025 16:59:08 +0000 (0:00:01.149) 0:02:46.529 ******* 2025-03-26 16:59:14.694470 | orchestrator | skipping: [testbed-node-1] 2025-03-26 16:59:14.694482 | orchestrator | skipping: [testbed-node-2] 2025-03-26 16:59:14.694495 | orchestrator | changed: [testbed-node-0] 2025-03-26 16:59:14.694507 | orchestrator | 2025-03-26 16:59:14.694519 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-03-26 16:59:14.694531 | orchestrator | Wednesday 26 March 2025 16:59:09 +0000 (0:00:00.859) 0:02:47.389 ******* 2025-03-26 16:59:14.694544 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.694556 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.694568 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.694580 | orchestrator | 2025-03-26 16:59:14.694593 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-03-26 16:59:14.694605 | orchestrator | Wednesday 26 March 2025 16:59:10 +0000 (0:00:01.026) 0:02:48.416 ******* 2025-03-26 16:59:14.694617 | orchestrator | ok: [testbed-node-0] 2025-03-26 16:59:14.694629 | orchestrator | ok: [testbed-node-1] 2025-03-26 16:59:14.694641 | orchestrator | ok: [testbed-node-2] 2025-03-26 16:59:14.694653 | orchestrator | 2025-03-26 16:59:14.694666 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 16:59:14.694678 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-03-26 16:59:14.694691 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-03-26 16:59:14.694709 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-03-26 16:59:17.734068 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:59:17.734199 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:59:17.734218 | orchestrator | testbed-node-5 : ok=12  changed=8  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-03-26 16:59:17.734232 | orchestrator | 2025-03-26 16:59:17.734247 | orchestrator | 2025-03-26 16:59:17.734261 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-26 16:59:17.734277 | orchestrator | Wednesday 26 March 2025 16:59:12 +0000 (0:00:01.458) 0:02:49.874 ******* 2025-03-26 16:59:17.734318 | orchestrator | =============================================================================== 2025-03-26 16:59:17.734333 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.80s 2025-03-26 16:59:17.734347 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 18.26s 2025-03-26 16:59:17.734361 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.38s 2025-03-26 16:59:17.734375 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.14s 2025-03-26 16:59:17.734389 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.79s 2025-03-26 16:59:17.734403 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.99s 2025-03-26 16:59:17.734417 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.66s 2025-03-26 16:59:17.734431 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 3.29s 2025-03-26 16:59:17.734451 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.20s 2025-03-26 16:59:17.734465 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.17s 2025-03-26 16:59:17.734479 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.14s 2025-03-26 16:59:17.734493 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.57s 2025-03-26 16:59:17.734507 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.35s 2025-03-26 16:59:17.734521 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.30s 2025-03-26 16:59:17.734534 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.27s 2025-03-26 16:59:17.734551 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.22s 2025-03-26 16:59:17.734566 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 2.12s 2025-03-26 16:59:17.734582 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 2.07s 2025-03-26 16:59:17.734597 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.72s 2025-03-26 16:59:17.734612 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.64s 2025-03-26 16:59:17.734627 | orchestrator | 2025-03-26 16:59:14 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:17.734658 | orchestrator | 2025-03-26 16:59:17 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:17.735690 | orchestrator | 2025-03-26 16:59:17 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:17.740939 | orchestrator | 2025-03-26 16:59:17 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:20.782829 | orchestrator | 
2025-03-26 16:59:17 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:20.783832 | orchestrator | 2025-03-26 16:59:20 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:23.855408 | orchestrator | 2025-03-26 16:59:20 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:23.855464 | orchestrator | 2025-03-26 16:59:20 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:23.855476 | orchestrator | 2025-03-26 16:59:20 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:23.855493 | orchestrator | 2025-03-26 16:59:23 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:23.857890 | orchestrator | 2025-03-26 16:59:23 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:23.858891 | orchestrator | 2025-03-26 16:59:23 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:23.859201 | orchestrator | 2025-03-26 16:59:23 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:26.910257 | orchestrator | 2025-03-26 16:59:26 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:26.911532 | orchestrator | 2025-03-26 16:59:26 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:26.913382 | orchestrator | 2025-03-26 16:59:26 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:26.913530 | orchestrator | 2025-03-26 16:59:26 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:29.965635 | orchestrator | 2025-03-26 16:59:29 | INFO  | Task c405e41f-a003-41a3-9916-e93fab5d692f is in state STARTED 2025-03-26 16:59:29.965834 | orchestrator | 2025-03-26 16:59:29 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:29.965859 | orchestrator | 2025-03-26 16:59:29 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:29.965874 | orchestrator | 2025-03-26 16:59:29 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:29.965894 | orchestrator | 2025-03-26 16:59:29 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:33.017306 | orchestrator | 2025-03-26 16:59:33 | INFO  | Task c405e41f-a003-41a3-9916-e93fab5d692f is in state STARTED 2025-03-26 16:59:33.018176 | orchestrator | 2025-03-26 16:59:33 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:33.018800 | orchestrator | 2025-03-26 16:59:33 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:33.022178 | orchestrator | 2025-03-26 16:59:33 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:36.077351 | orchestrator | 2025-03-26 16:59:33 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:36.077461 | orchestrator | 2025-03-26 16:59:36 | INFO  | Task c405e41f-a003-41a3-9916-e93fab5d692f is in state STARTED 2025-03-26 16:59:36.079778 | orchestrator | 2025-03-26 16:59:36 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:36.082839 | orchestrator | 2025-03-26 16:59:36 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:36.084341 | orchestrator | 2025-03-26 16:59:36 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:39.128468 | orchestrator | 2025-03-26 16:59:36 | INFO 
 | Wait 1 second(s) until the next check 2025-03-26 16:59:39.128563 | orchestrator | 2025-03-26 16:59:39 | INFO  | Task c405e41f-a003-41a3-9916-e93fab5d692f is in state STARTED 2025-03-26 16:59:39.133126 | orchestrator | 2025-03-26 16:59:39 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:39.136095 | orchestrator | 2025-03-26 16:59:39 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:39.137756 | orchestrator | 2025-03-26 16:59:39 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:39.137981 | orchestrator | 2025-03-26 16:59:39 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:42.209937 | orchestrator | 2025-03-26 16:59:42 | INFO  | Task c405e41f-a003-41a3-9916-e93fab5d692f is in state STARTED 2025-03-26 16:59:42.210252 | orchestrator | 2025-03-26 16:59:42 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:42.212233 | orchestrator | 2025-03-26 16:59:42 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:42.215367 | orchestrator | 2025-03-26 16:59:42 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:42.216104 | orchestrator | 2025-03-26 16:59:42 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:45.270068 | orchestrator | 2025-03-26 16:59:45 | INFO  | Task c405e41f-a003-41a3-9916-e93fab5d692f is in state SUCCESS 2025-03-26 16:59:45.271034 | orchestrator | 2025-03-26 16:59:45 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:45.278949 | orchestrator | 2025-03-26 16:59:45 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:45.281896 | orchestrator | 2025-03-26 16:59:45 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:45.284930 | orchestrator | 2025-03-26 16:59:45 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:48.328928 | orchestrator | 2025-03-26 16:59:48 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:48.330433 | orchestrator | 2025-03-26 16:59:48 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:48.332806 | orchestrator | 2025-03-26 16:59:48 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:51.406127 | orchestrator | 2025-03-26 16:59:48 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:51.406215 | orchestrator | 2025-03-26 16:59:51 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:51.408729 | orchestrator | 2025-03-26 16:59:51 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:51.411975 | orchestrator | 2025-03-26 16:59:51 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:51.413064 | orchestrator | 2025-03-26 16:59:51 | INFO  | Wait 1 second(s) until the next check 2025-03-26 16:59:54.453105 | orchestrator | 2025-03-26 16:59:54 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 16:59:54.454767 | orchestrator | 2025-03-26 16:59:54 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 16:59:54.460198 | orchestrator | 2025-03-26 16:59:54 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state STARTED 2025-03-26 16:59:57.501391 | orchestrator | 2025-03-26 16:59:54 | INFO  | Wait 1 second(s) until 
the next check
2025-03-26 16:59:57 - 17:03:55 | orchestrator | INFO  | Tasks 41baa84b-a087-4b66-8b6f-7f993067aa7e, 28a05312-3755-4e41-b843-1b5ac5e1b438 and 239fbd64-e613-4277-abc1-93d6eb8b2be3 are repeatedly reported in state STARTED, re-checked about every three seconds, each check followed by "Wait 1 second(s) until the next check"
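The block above is the deployment client polling the state of its background tasks until each one finishes. Purely as a sketch of that wait pattern, and not code taken from this job (the check command below is a hypothetical placeholder), the same loop could be expressed as an Ansible retry task:

- name: Wait until a background task reports SUCCESS
  ansible.builtin.command: /usr/local/bin/check-task-state 41baa84b-a087-4b66-8b6f-7f993067aa7e  # hypothetical helper that prints the task state
  register: task_state
  changed_when: false
  retries: 120   # stop waiting after roughly six minutes
  delay: 3       # matches the ~3 second cadence seen in the log
  until: task_state.stdout == "SUCCESS"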
2025-03-26 17:03:58.950191 | orchestrator | 2025-03-26 17:03:58 | INFO  | Task d65e1e7e-e096-408f-b62d-51dda8e1cc6e is in state STARTED
2025-03-26 17:03:58.952356 | orchestrator | 2025-03-26 17:03:58 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED
2025-03-26 17:03:58.952940 | orchestrator | 2025-03-26 17:03:58 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED
2025-03-26 17:03:58.960496 | orchestrator | 2025-03-26 17:03:58 | INFO  | Task 239fbd64-e613-4277-abc1-93d6eb8b2be3 is in state SUCCESS
2025-03-26 17:03:58.962444 | orchestrator | None

2025-03-26 17:03:58.962476 | orchestrator | PLAY [Group hosts based on configuration] **************************************

2025-03-26 17:03:58.962506 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-03-26 17:03:58.962522 | orchestrator | Wednesday 26 March 2025 16:54:34 +0000 (0:00:00.742) 0:00:00.742 *******
2025-03-26 17:03:58.962537 | orchestrator | ok: [testbed-node-0]
2025-03-26 17:03:58.962553 | orchestrator | ok: [testbed-node-1]
2025-03-26 17:03:58.962568 | orchestrator | ok: [testbed-node-2]
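The two host-grouping tasks in this play ("Group hosts based on Kolla action" above and "Group hosts based on enabled services" just below) follow Ansible's usual group_by pattern. A minimal sketch of that pattern, with variable names chosen for illustration rather than taken from this deployment:

- name: Group hosts based on Kolla action
  ansible.builtin.group_by:
    key: "kolla_action_{{ kolla_action }}"

- name: Group hosts based on enabled services
  ansible.builtin.group_by:
    key: "{{ item }}"
  loop:
    - "enable_loadbalancer_{{ enable_loadbalancer | bool }}"

The second task is what produces the enable_loadbalancer_True items reported for the three testbed nodes below.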
2025-03-26 17:03:58.962629 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-03-26 17:03:58.962669 | orchestrator | Wednesday 26 March 2025 16:54:35 +0000 (0:00:01.036) 0:00:01.778 *******
2025-03-26 17:03:58.962685 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-03-26 17:03:58.962699 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-03-26 17:03:58.962713 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)

2025-03-26 17:03:58.962741 | orchestrator | PLAY [Apply role loadbalancer] *************************************************

2025-03-26 17:03:58.962769 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-03-26 17:03:58.962783 | orchestrator | Wednesday 26 March 2025 16:54:36 +0000 (0:00:00.740) 0:00:02.519 *******
2025-03-26 17:03:58.962797 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

2025-03-26 17:03:58.962824 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-03-26 17:03:58.963011 | orchestrator | Wednesday 26 March 2025 16:54:38 +0000 (0:00:02.356) 0:00:04.876 *******
2025-03-26 17:03:58.963029 | orchestrator | ok: [testbed-node-0]
2025-03-26 17:03:58.963044 | orchestrator | ok: [testbed-node-2]
2025-03-26 17:03:58.963059 | orchestrator | ok: [testbed-node-1]

2025-03-26 17:03:58.963090 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-03-26 17:03:58.963105 | orchestrator | Wednesday 26 March 2025 16:54:41 +0000 (0:00:02.158) 0:00:07.034 *******
2025-03-26 17:03:58.963120 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2

2025-03-26 17:03:58.963151 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-03-26 17:03:58.963165 | orchestrator | Wednesday 26 March 2025 16:54:42 +0000 (0:00:01.588) 0:00:08.623 *******
2025-03-26 17:03:58.963180 | orchestrator | ok: [testbed-node-0]
2025-03-26 17:03:58.963195 | orchestrator | ok: [testbed-node-1]
2025-03-26 17:03:58.963210 | orchestrator | ok: [testbed-node-2]

2025-03-26 17:03:58.963240 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-03-26 17:03:58.963254 | orchestrator | Wednesday 26 March 2025 16:54:44 +0000 (0:00:01.822) 0:00:10.446 *******
2025-03-26 17:03:58.963268 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-03-26 17:03:58.963282 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-03-26 17:03:58.963296 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-03-26 17:03:58.963310 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-03-26 17:03:58.963354 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-03-26 17:03:58.963370 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-03-26 17:03:58.963384 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-03-26 17:03:58.963399 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-03-26 17:03:58.963412 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-03-26 17:03:58.963440 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-03-26 17:03:58.963455 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-03-26 17:03:58.963468 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
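The sysctl items just reported can be reproduced with Ansible's sysctl module. A minimal sketch mirroring the values from the log (illustrative only, not the role's actual task; the KOLLA_UNSET entries, which this run left unchanged, are omitted):

- name: Setting sysctl values
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_set: true
    state: present
  loop:
    - { name: net.ipv6.ip_nonlocal_bind, value: 1 }
    - { name: net.ipv4.ip_nonlocal_bind, value: 1 }
    - { name: net.unix.max_dgram_qlen, value: 128 }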
orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-03-26 17:03:58.963412 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-03-26 17:03:58.963440 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-03-26 17:03:58.963455 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-03-26 17:03:58.963468 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-03-26 17:03:58.963482 | orchestrator | 2025-03-26 17:03:58.963496 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-03-26 17:03:58.963520 | orchestrator | Wednesday 26 March 2025 16:54:49 +0000 (0:00:05.195) 0:00:15.643 ******* 2025-03-26 17:03:58.963534 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-03-26 17:03:58.963548 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-03-26 17:03:58.963562 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-03-26 17:03:58.963575 | orchestrator | 2025-03-26 17:03:58.963589 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-03-26 17:03:58.963603 | orchestrator | Wednesday 26 March 2025 16:54:51 +0000 (0:00:01.821) 0:00:17.464 ******* 2025-03-26 17:03:58.963617 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-03-26 17:03:58.963636 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-03-26 17:03:58.963718 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-03-26 17:03:58.963733 | orchestrator | 2025-03-26 17:03:58.963747 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-03-26 17:03:58.963760 | orchestrator | Wednesday 26 March 2025 16:54:54 +0000 (0:00:02.503) 0:00:19.968 ******* 2025-03-26 17:03:58.963774 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-03-26 17:03:58.963788 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.963814 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-03-26 17:03:58.963850 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.963865 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-03-26 17:03:58.963879 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.963893 | orchestrator | 2025-03-26 17:03:58.963907 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-03-26 17:03:58.963921 | orchestrator | Wednesday 26 March 2025 16:54:55 +0000 (0:00:01.865) 0:00:21.833 ******* 2025-03-26 17:03:58.963937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.963959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.963973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.963988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.964012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.964027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-26 17:03:58.964051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.964066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.964081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-26 17:03:58.964096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.964117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-26 17:03:58.964131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.964145 | orchestrator | 2025-03-26 17:03:58.964159 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-03-26 17:03:58.964173 | orchestrator | Wednesday 26 March 2025 16:54:59 +0000 (0:00:03.222) 0:00:25.057 ******* 2025-03-26 17:03:58.964187 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.964201 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.964323 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.964340 | orchestrator | 2025-03-26 17:03:58.964360 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-03-26 17:03:58.964374 | orchestrator | Wednesday 26 March 2025 16:55:02 +0000 (0:00:03.370) 0:00:28.428 ******* 2025-03-26 17:03:58.964415 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-03-26 17:03:58.964431 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-03-26 17:03:58.964445 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-03-26 17:03:58.964459 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-03-26 17:03:58.964473 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-03-26 17:03:58.964487 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-03-26 17:03:58.964500 | orchestrator | 2025-03-26 17:03:58.964514 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-03-26 17:03:58.964528 | orchestrator | Wednesday 26 March 2025 16:55:10 +0000 (0:00:08.091) 0:00:36.520 ******* 2025-03-26 17:03:58.964542 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.964555 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.964569 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.964583 | orchestrator | 2025-03-26 17:03:58.964596 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-03-26 17:03:58.964634 | orchestrator | Wednesday 26 March 2025 16:55:13 +0000 (0:00:02.846) 0:00:39.369 ******* 2025-03-26 17:03:58.964649 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.964663 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.964677 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.964691 | orchestrator | 2025-03-26 17:03:58.964705 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-03-26 17:03:58.964719 | orchestrator | Wednesday 26 March 2025 16:55:19 +0000 (0:00:06.172) 0:00:45.541 ******* 2025-03-26 17:03:58.964733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-26 17:03:58.964756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-26 17:03:58.964771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-26 17:03:58.964785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-26 17:03:58.964808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-26 17:03:58.964823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-26 17:03:58.964931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-26 17:03:58.964956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-26 17:03:58.964970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-26 17:03:58.964985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.965000 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.965040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.965056 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.965078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', 
'__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.965093 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.965107 | orchestrator | 2025-03-26 17:03:58.965121 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-03-26 17:03:58.965135 | orchestrator | Wednesday 26 March 2025 16:55:26 +0000 (0:00:06.433) 0:00:51.974 ******* 2025-03-26 17:03:58.965149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.965173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.965188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.965202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.965222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.965237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.965251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-26 17:03:58.965272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-26 17:03:58.965287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-26 17:03:58.965301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.965316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.965338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.966718 | orchestrator | 2025-03-26 17:03:58.966762 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-03-26 17:03:58.966778 | orchestrator | Wednesday 26 March 2025 16:55:32 +0000 (0:00:06.158) 0:00:58.133 ******* 2025-03-26 17:03:58.966795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.966826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.966872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.966888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.966947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.966975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.966991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-26 17:03:58.967019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.967039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-26 17:03:58.967055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.967069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-26 17:03:58.967083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.967098 | orchestrator | 2025-03-26 17:03:58.967244 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-03-26 17:03:58.967260 | orchestrator | Wednesday 26 March 2025 16:55:35 +0000 (0:00:03.640) 0:01:01.773 ******* 2025-03-26 17:03:58.967282 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-26 17:03:58.967311 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-26 17:03:58.967326 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-03-26 17:03:58.967340 | orchestrator | 2025-03-26 17:03:58.967354 | orchestrator | TASK [loadbalancer : Copying over proxysql config] 
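The items dumped by the "Copying checks for services which are enabled" and "Copying over config.json files for services" tasks above all follow the same kolla service-definition shape. As a reading aid, here is a minimal Python sketch of that structure, with values copied from the testbed-node-0 entries in this log; the selection logic at the end is only inferred from which items were reported as skipping (keepalived defines no healthcheck, haproxy-ssh is disabled) and is not the role's actual Jinja condition.

```python
# Rough reconstruction of the per-service definitions iterated by the
# loadbalancer role above (values copied from the testbed-node-0 items in
# this log; the real data comes from kolla-ansible variables, not this dict).
loadbalancer_services = {
    "haproxy": {
        "container_name": "haproxy",
        "group": "loadbalancer",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/haproxy:2.4.24.20241206",
        "privileged": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
            "timeout": "30",
        },
    },
    "keepalived": {
        "container_name": "keepalived",
        "group": "loadbalancer",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/keepalived:2.2.4.20241206",
        "privileged": True,
        # no "healthcheck" key -> skipped by the "Copying checks" task
    },
    "haproxy-ssh": {
        "container_name": "haproxy_ssh",
        "group": "loadbalancer",
        "enabled": False,  # disabled -> skipped by every task above
        "image": "registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206",
    },
}

# Inferred skip logic: only enabled services that define a healthcheck
# get a check script copied; this mirrors the changed/skipping pattern above.
for name, svc in loadbalancer_services.items():
    if svc.get("enabled") and "healthcheck" in svc:
        print(f"copy check for {name}")
    else:
        print(f"skip {name}")
```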
***************************** 2025-03-26 17:03:58.967368 | orchestrator | Wednesday 26 March 2025 16:55:41 +0000 (0:00:05.736) 0:01:07.510 ******* 2025-03-26 17:03:58.967382 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-03-26 17:03:58.967396 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-03-26 17:03:58.967410 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-03-26 17:03:58.967423 | orchestrator | 2025-03-26 17:03:58.967437 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-03-26 17:03:58.967451 | orchestrator | Wednesday 26 March 2025 16:55:46 +0000 (0:00:05.343) 0:01:12.853 ******* 2025-03-26 17:03:58.967465 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.967507 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.967522 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.967536 | orchestrator | 2025-03-26 17:03:58.967550 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-03-26 17:03:58.967564 | orchestrator | Wednesday 26 March 2025 16:55:48 +0000 (0:00:01.476) 0:01:14.330 ******* 2025-03-26 17:03:58.967578 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-26 17:03:58.967593 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-26 17:03:58.967608 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-03-26 17:03:58.967622 | orchestrator | 2025-03-26 17:03:58.967636 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-03-26 17:03:58.967712 | orchestrator | Wednesday 26 March 2025 16:55:54 +0000 (0:00:05.752) 0:01:20.083 ******* 2025-03-26 17:03:58.967728 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-26 17:03:58.967742 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-26 17:03:58.967756 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-03-26 17:03:58.967770 | orchestrator | 2025-03-26 17:03:58.967784 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-03-26 17:03:58.967798 | orchestrator | Wednesday 26 March 2025 16:55:58 +0000 (0:00:03.940) 0:01:24.024 ******* 2025-03-26 17:03:58.967812 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-03-26 17:03:58.967953 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-03-26 17:03:58.967976 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-03-26 17:03:58.967990 | orchestrator | 2025-03-26 17:03:58.968004 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-03-26 17:03:58.968018 | orchestrator | Wednesday 26 March 2025 16:56:00 +0000 (0:00:02.832) 0:01:26.856 ******* 2025-03-26 17:03:58.968032 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-03-26 
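The certificate tasks in this stretch distribute haproxy.pem and haproxy-internal.pem to the nodes. If you want to confirm by hand that the files arrived, a sketch like the following can be used; the /etc/kolla/haproxy/ path is an assumption derived from the '/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro' bind mount shown in the haproxy items and may differ in other setups.

```python
# Illustrative only: check that the copied TLS material is present on a node.
# The directory below is an assumption based on the bind mount in this log.
from pathlib import Path

for pem in ("haproxy.pem", "haproxy-internal.pem"):
    path = Path("/etc/kolla/haproxy") / pem
    if path.is_file() and path.stat().st_size > 0:
        print(f"{path}: present ({path.stat().st_size} bytes)")
    else:
        print(f"{path}: missing or empty")
```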
17:03:58.968046 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-03-26 17:03:58.968060 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-03-26 17:03:58.968074 | orchestrator | 2025-03-26 17:03:58.968093 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-03-26 17:03:58.968116 | orchestrator | Wednesday 26 March 2025 16:56:03 +0000 (0:00:02.246) 0:01:29.103 ******* 2025-03-26 17:03:58.968130 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.968145 | orchestrator | 2025-03-26 17:03:58.968159 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-03-26 17:03:58.968173 | orchestrator | Wednesday 26 March 2025 16:56:04 +0000 (0:00:01.099) 0:01:30.202 ******* 2025-03-26 17:03:58.968187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.968212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.968283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.968299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.968312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.968326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.968346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-26 17:03:58.968370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-26 17:03:58.968389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-26 17:03:58.968402 | orchestrator | 2025-03-26 17:03:58.968414 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-03-26 17:03:58.968427 | orchestrator | Wednesday 26 March 2025 16:56:08 +0000 (0:00:03.946) 0:01:34.148 ******* 2025-03-26 17:03:58.968439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-26 17:03:58.968452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-26 17:03:58.968465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-26 17:03:58.968489 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.968502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-26 17:03:58.968515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-26 17:03:58.968633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-26 17:03:58.968648 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.968684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-26 17:03:58.968698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-26 17:03:58.968712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-26 17:03:58.968731 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.968744 | orchestrator | 2025-03-26 17:03:58.968757 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-03-26 17:03:58.968769 | orchestrator | Wednesday 26 March 2025 16:56:09 +0000 (0:00:01.777) 0:01:35.925 ******* 2025-03-26 17:03:58.968782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-03-26 17:03:58.968795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-26 17:03:58.968878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-26 17:03:58.968894 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.968907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-03-26 17:03:58.968920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-26 17:03:58.968933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-26 17:03:58.968953 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.968966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-03-26 17:03:58.968979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-03-26 17:03:58.968992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-03-26 17:03:58.969005 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.969017 | orchestrator | 2025-03-26 17:03:58.969030 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-03-26 17:03:58.969047 | orchestrator | Wednesday 26 March 2025 16:56:12 +0000 (0:00:02.105) 0:01:38.031 ******* 2025-03-26 17:03:58.969060 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-26 17:03:58.969073 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-26 17:03:58.969120 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-03-26 17:03:58.969134 | orchestrator | 2025-03-26 17:03:58.969146 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-03-26 17:03:58.969159 | orchestrator | Wednesday 26 March 2025 16:56:14 +0000 (0:00:02.058) 0:01:40.090 ******* 2025-03-26 17:03:58.969197 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-03-26 17:03:58.969211 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-03-26 17:03:58.969224 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-03-26 17:03:58.969236 | orchestrator | 2025-03-26 17:03:58.969249 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-03-26 17:03:58.969261 | orchestrator | Wednesday 26 March 2025 16:56:16 +0000 (0:00:02.564) 0:01:42.654 ******* 2025-03-26 17:03:58.969274 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-26 17:03:58.969286 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-26 17:03:58.969306 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-03-26 17:03:58.969318 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-26 17:03:58.969331 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.969343 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-26 17:03:58.969356 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.969368 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-03-26 17:03:58.969380 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.969393 | orchestrator | 2025-03-26 17:03:58.969405 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-03-26 17:03:58.969417 | orchestrator | Wednesday 26 March 2025 16:56:19 +0000 (0:00:02.589) 0:01:45.244 ******* 2025-03-26 17:03:58.969435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.969449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.969462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-03-26 17:03:58.969525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.969542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.969567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-03-26 17:03:58.969581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-26 17:03:58.969594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.969607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-26 17:03:58.969663 
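The "Check loadbalancer containers" task in this stretch (re)creates the haproxy, proxysql and keepalived containers on each node. A hedged, local-only sanity check, independent of kolla-ansible, could look like the sketch below; it assumes the Docker engine is in use and the docker Python package is installed on the node, and the container names are taken from the log output.

```python
# Hedged local check (not something kolla-ansible runs): confirm that the
# loadbalancer containers handled by the task above exist and report a status.
import docker
from docker.errors import NotFound

expected = ["haproxy", "proxysql", "keepalived"]  # names from the log
client = docker.from_env()
for name in expected:
    try:
        print(f"{name}: {client.containers.get(name).status}")
    except NotFound:
        print(f"{name}: not found")
```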
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.969677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-03-26 17:03:58.969697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82', '__omit_place_holder__e5d199c6a324e3e82f2e70b3218178f31726fb82'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-03-26 17:03:58.969710 | orchestrator | 2025-03-26 17:03:58.969722 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-03-26 17:03:58.969735 | orchestrator | Wednesday 26 March 2025 16:56:23 +0000 (0:00:03.774) 0:01:49.018 ******* 2025-03-26 17:03:58.969748 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.969761 | orchestrator | 2025-03-26 17:03:58.969773 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-03-26 17:03:58.969786 | orchestrator | Wednesday 26 March 2025 16:56:24 +0000 (0:00:01.297) 0:01:50.316 ******* 2025-03-26 17:03:58.969799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8042', 'listen_port': '8042'}}}}) 2025-03-26 17:03:58.969813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.969845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.969867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.969913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-26 17:03:58.969928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.969942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.969955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.969968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-03-26 17:03:58.969987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.970053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.970071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.970085 | orchestrator | 2025-03-26 17:03:58.970097 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-03-26 17:03:58.970110 | orchestrator | Wednesday 26 March 2025 16:56:32 +0000 (0:00:07.721) 0:01:58.038 ******* 2025-03-26 17:03:58.970123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-26 17:03:58.970136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.970149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.970245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.970269 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.970296 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-26 17:03:58.970310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.970323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.970336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.970350 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.970362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-03-26 17:03:58.970437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.970455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.970468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.970481 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.970498 | orchestrator | 2025-03-26 17:03:58.970511 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-03-26 17:03:58.970524 | orchestrator | Wednesday 26 March 2025 16:56:32 +0000 (0:00:00.788) 0:01:58.827 ******* 2025-03-26 17:03:58.970536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-26 17:03:58.970550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-03-26 17:03:58.970564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-26 17:03:58.970576 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.970589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-26 17:03:58.970601 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.970614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8042', 'listen_port': '8042'}})  2025-03-26 17:03:58.970626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-03-26 17:03:58.970638 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.970651 | orchestrator | 2025-03-26 17:03:58.970663 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-03-26 17:03:58.970675 | orchestrator | Wednesday 26 March 2025 16:56:34 +0000 (0:00:01.445) 0:02:00.272 ******* 2025-03-26 17:03:58.970688 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.970700 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.970718 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.970731 | orchestrator | 2025-03-26 17:03:58.970743 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-03-26 17:03:58.970755 | orchestrator | Wednesday 26 March 2025 16:56:36 +0000 (0:00:01.859) 0:02:02.132 ******* 2025-03-26 17:03:58.970768 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.970780 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.970792 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.970805 | orchestrator | 2025-03-26 17:03:58.970817 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-03-26 17:03:58.970878 | orchestrator | Wednesday 26 March 2025 16:56:38 +0000 (0:00:02.589) 0:02:04.721 ******* 2025-03-26 17:03:58.970892 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.970905 | orchestrator | 2025-03-26 17:03:58.970917 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-03-26 17:03:58.970930 | orchestrator | Wednesday 26 March 2025 16:56:39 +0000 (0:00:01.088) 0:02:05.810 ******* 2025-03-26 17:03:58.971068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.971098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.971112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.971126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.971144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.971161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.971173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.971184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.971194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.971210 | orchestrator | 2025-03-26 17:03:58.971220 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-03-26 17:03:58.971230 | orchestrator | Wednesday 26 March 2025 16:56:48 +0000 (0:00:08.787) 0:02:14.598 ******* 2025-03-26 17:03:58.971247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.971264 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.971276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.971286 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.971297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.971315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.971331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.971342 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.971357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.971369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.971379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.971390 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.971400 | orchestrator | 2025-03-26 17:03:58.971410 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-03-26 17:03:58.971420 | orchestrator | Wednesday 26 March 2025 16:56:49 +0000 (0:00:01.157) 0:02:15.755 ******* 2025-03-26 17:03:58.971431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-26 17:03:58.971446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-26 17:03:58.971456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-26 17:03:58.971467 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.971501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-26 17:03:58.971513 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.971523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-26 17:03:58.971533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-03-26 17:03:58.971543 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.971554 | orchestrator | 2025-03-26 17:03:58.971564 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-03-26 17:03:58.971574 | orchestrator | Wednesday 26 March 2025 16:56:51 +0000 (0:00:01.502) 0:02:17.258 ******* 2025-03-26 17:03:58.971584 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.971595 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.971605 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.971615 | orchestrator | 2025-03-26 17:03:58.971625 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-03-26 17:03:58.971635 | orchestrator | Wednesday 26 March 2025 16:56:52 +0000 (0:00:01.528) 0:02:18.787 ******* 2025-03-26 17:03:58.971645 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.971655 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.971665 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.971675 | orchestrator | 2025-03-26 17:03:58.971685 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-03-26 17:03:58.971696 | orchestrator | Wednesday 26 March 2025 16:56:55 +0000 (0:00:02.535) 0:02:21.322 ******* 2025-03-26 17:03:58.971706 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.971716 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.971726 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.971736 | orchestrator | 2025-03-26 17:03:58.971751 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-03-26 17:03:58.971762 | orchestrator | Wednesday 26 March 2025 16:56:55 +0000 (0:00:00.515) 0:02:21.837 ******* 2025-03-26 17:03:58.971772 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.971782 | orchestrator | 2025-03-26 17:03:58.971792 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-03-26 17:03:58.971802 | orchestrator | Wednesday 26 March 2025 16:56:57 +0000 (0:00:01.267) 0:02:23.105 
******* 2025-03-26 17:03:58.971820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-03-26 17:03:58.971905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-03-26 17:03:58.971919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-03-26 17:03:58.971929 | orchestrator | 2025-03-26 17:03:58.971939 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-03-26 17:03:58.971949 | orchestrator | Wednesday 26 March 2025 16:57:01 +0000 (0:00:04.234) 0:02:27.339 ******* 2025-03-26 17:03:58.971959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-26 17:03:58.971970 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.971998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-26 17:03:58.972015 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.972026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-03-26 17:03:58.972036 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.972046 | orchestrator | 2025-03-26 17:03:58.972057 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-03-26 17:03:58.972067 | orchestrator | Wednesday 26 March 2025 16:57:03 +0000 (0:00:02.232) 0:02:29.572 ******* 2025-03-26 17:03:58.972077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-26 17:03:58.972088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-26 17:03:58.972099 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.972109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-26 17:03:58.972119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-26 17:03:58.972130 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.972140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-26 17:03:58.972159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-03-26 17:03:58.972176 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.972186 | orchestrator | 2025-03-26 17:03:58.972197 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-03-26 17:03:58.972207 | orchestrator | Wednesday 26 March 2025 16:57:07 +0000 (0:00:03.863) 0:02:33.435 ******* 2025-03-26 17:03:58.972217 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.972227 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.972237 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.972247 | orchestrator | 2025-03-26 17:03:58.972258 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-03-26 17:03:58.972268 | orchestrator | Wednesday 26 March 2025 16:57:08 +0000 (0:00:00.905) 0:02:34.341 ******* 2025-03-26 17:03:58.972278 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.972288 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.972298 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.972308 | orchestrator | 2025-03-26 17:03:58.972318 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-03-26 17:03:58.972328 | orchestrator | Wednesday 26 March 2025 16:57:10 +0000 (0:00:02.220) 0:02:36.562 ******* 2025-03-26 17:03:58.972338 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.972348 | orchestrator | 2025-03-26 17:03:58.972358 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-03-26 17:03:58.972368 | orchestrator | Wednesday 26 March 2025 16:57:12 +0000 (0:00:01.647) 0:02:38.210 ******* 2025-03-26 
17:03:58.972378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.972389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.972452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.972512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972550 | orchestrator | 2025-03-26 17:03:58.972560 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-03-26 17:03:58.972570 | orchestrator | Wednesday 26 March 2025 16:57:18 +0000 (0:00:06.593) 0:02:44.804 ******* 2025-03-26 17:03:58.972580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.972596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972725 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.972735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  
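Annotation: the loop results above follow a consistent pattern — only service entries that are enabled and carry a 'haproxy' mapping (aodh-api, barbican-api, ceph-rgw, cinder-api) report "changed", while evaluator/listener/worker/scheduler/volume/backup-style entries without one are skipped. The short Python sketch below is an illustration of that selection pattern only, not the kolla-ansible haproxy-config role implementation; the service map is trimmed to values copied from the "Copying over cinder haproxy config" output above.

# Illustrative sketch only -- not the kolla-ansible role logic.
# Values are taken from the cinder loop output above; services without a
# 'haproxy' mapping are the ones reported as "skipping".
cinder_services = {
    "cinder-api": {
        "enabled": True,
        "haproxy": {
            "cinder_api": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "8776", "listen_port": "8776", "tls_backend": "no",
            },
            "cinder_api_external": {
                "enabled": "yes", "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "8776", "listen_port": "8776", "tls_backend": "no",
            },
        },
    },
    "cinder-scheduler": {"enabled": True},
    "cinder-volume": {"enabled": True},
    "cinder-backup": {"enabled": True},
}

for name, svc in cinder_services.items():
    if svc.get("enabled") and svc.get("haproxy"):
        # Matches the 'changed' items: a listener is rendered for each entry of
        # the 'haproxy' mapping (internal and external frontend on port 8776).
        print(f"changed:  {name} -> render {', '.join(svc['haproxy'])}")
    else:
        # Matches the 'skipping' items: nothing to load-balance for this service.
        print(f"skipping: {name} -> no 'haproxy' mapping")

End of annotation; the job console output continues below.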
2025-03-26 17:03:58.972746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972805 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.972815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.972839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.972879 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.972889 | orchestrator | 2025-03-26 17:03:58.972899 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-03-26 17:03:58.972914 | orchestrator | Wednesday 26 March 2025 16:57:19 +0000 (0:00:01.149) 0:02:45.953 ******* 2025-03-26 17:03:58.972925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-26 17:03:58.972939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-26 17:03:58.972950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-26 17:03:58.972961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-26 17:03:58.972971 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.972981 | 
orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.972991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-26 17:03:58.973002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-03-26 17:03:58.973012 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.973022 | orchestrator | 2025-03-26 17:03:58.973032 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-03-26 17:03:58.973042 | orchestrator | Wednesday 26 March 2025 16:57:21 +0000 (0:00:01.809) 0:02:47.763 ******* 2025-03-26 17:03:58.973052 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.973062 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.973072 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.973082 | orchestrator | 2025-03-26 17:03:58.973092 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-03-26 17:03:58.973102 | orchestrator | Wednesday 26 March 2025 16:57:23 +0000 (0:00:01.803) 0:02:49.566 ******* 2025-03-26 17:03:58.973113 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.973123 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.973133 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.973167 | orchestrator | 2025-03-26 17:03:58.973179 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-03-26 17:03:58.973189 | orchestrator | Wednesday 26 March 2025 16:57:26 +0000 (0:00:02.739) 0:02:52.306 ******* 2025-03-26 17:03:58.973199 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.973209 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.973219 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.973242 | orchestrator | 2025-03-26 17:03:58.973253 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-03-26 17:03:58.973263 | orchestrator | Wednesday 26 March 2025 16:57:26 +0000 (0:00:00.564) 0:02:52.870 ******* 2025-03-26 17:03:58.973273 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.973283 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.973293 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.973303 | orchestrator | 2025-03-26 17:03:58.973313 | orchestrator | TASK [include_role : designate] ************************************************ 2025-03-26 17:03:58.973323 | orchestrator | Wednesday 26 March 2025 16:57:27 +0000 (0:00:00.371) 0:02:53.242 ******* 2025-03-26 17:03:58.973333 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.973343 | orchestrator | 2025-03-26 17:03:58.973353 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-03-26 17:03:58.973363 | orchestrator | Wednesday 26 March 2025 16:57:28 +0000 (0:00:01.330) 0:02:54.572 ******* 2025-03-26 17:03:58.973374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-26 17:03:58.973467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-26 17:03:58.973480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}}}}) 2025-03-26 17:03:58.973520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-26 17:03:58.973553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-03-26 17:03:58.973659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-26 17:03:58.973669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 
'timeout': '30'}}})  2025-03-26 17:03:58.973735 | orchestrator | 2025-03-26 17:03:58.973749 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-03-26 17:03:58.973760 | orchestrator | Wednesday 26 March 2025 16:57:36 +0000 (0:00:07.919) 0:03:02.492 ******* 2025-03-26 17:03:58.973771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-26 17:03:58.973787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-26 17:03:58.973804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-26 17:03:58.973887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-26 17:03:58.973915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.973968 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.973981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.974041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.974054 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.974063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-03-26 17:03:58.974072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-03-26 17:03:58.974081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.974090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.974104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.974121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.974136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.974146 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.974154 | orchestrator | 2025-03-26 17:03:58.974163 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-03-26 17:03:58.974172 | orchestrator | Wednesday 26 March 2025 16:57:38 +0000 (0:00:01.581) 0:03:04.073 ******* 2025-03-26 17:03:58.974180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-26 17:03:58.974189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-26 17:03:58.974199 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.974208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-26 17:03:58.974217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-26 17:03:58.974225 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.974234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-03-26 17:03:58.974243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-03-26 17:03:58.974251 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.974260 | orchestrator | 2025-03-26 17:03:58.974268 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-03-26 17:03:58.974277 | orchestrator | Wednesday 26 March 2025 16:57:40 +0000 (0:00:01.892) 0:03:05.966 ******* 2025-03-26 17:03:58.974285 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.974294 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.974302 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.974311 | orchestrator | 2025-03-26 17:03:58.974319 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-03-26 17:03:58.974328 | orchestrator | Wednesday 26 March 2025 16:57:42 +0000 (0:00:02.285) 0:03:08.252 ******* 2025-03-26 17:03:58.974336 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.974345 | orchestrator | 
changed: [testbed-node-1] 2025-03-26 17:03:58.974358 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.974367 | orchestrator | 2025-03-26 17:03:58.974375 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-03-26 17:03:58.974384 | orchestrator | Wednesday 26 March 2025 16:57:44 +0000 (0:00:02.556) 0:03:10.808 ******* 2025-03-26 17:03:58.974392 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.974401 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.974409 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.974418 | orchestrator | 2025-03-26 17:03:58.974489 | orchestrator | TASK [include_role : glance] *************************************************** 2025-03-26 17:03:58.974503 | orchestrator | Wednesday 26 March 2025 16:57:45 +0000 (0:00:01.101) 0:03:11.909 ******* 2025-03-26 17:03:58.974512 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.974520 | orchestrator | 2025-03-26 17:03:58.974529 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-03-26 17:03:58.974537 | orchestrator | Wednesday 26 March 2025 16:57:47 +0000 (0:00:01.520) 0:03:13.430 ******* 2025-03-26 17:03:58.974547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-26 17:03:58.974558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-26 17:03:58.974577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-26 17:03:58.974587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-26 17:03:58.974607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-03-26 17:03:58.974617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-26 17:03:58.974626 | orchestrator | 2025-03-26 17:03:58.974635 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-03-26 17:03:58.974648 | orchestrator | Wednesday 26 March 2025 16:57:57 +0000 (0:00:09.576) 0:03:23.006 ******* 2025-03-26 17:03:58.974662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-26 17:03:58.974672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-26 17:03:58.974681 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.974694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-26 17:03:58.974708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-26 17:03:58.974717 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.974726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-03-26 17:03:58.974745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-03-26 17:03:58.974754 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.974763 | orchestrator | 2025-03-26 17:03:58.974772 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-03-26 17:03:58.974780 | orchestrator | Wednesday 26 March 2025 16:58:03 +0000 (0:00:06.630) 0:03:29.636 ******* 2025-03-26 17:03:58.974789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-26 17:03:58.974798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-26 17:03:58.974812 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.974821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-26 17:03:58.974845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-26 17:03:58.974860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-26 17:03:58.974869 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.974878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-03-26 17:03:58.974887 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.974895 | orchestrator | 2025-03-26 17:03:58.974904 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-03-26 17:03:58.974916 | orchestrator | Wednesday 26 March 2025 16:58:09 +0000 (0:00:05.735) 0:03:35.372 ******* 2025-03-26 17:03:58.974925 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.974933 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.974942 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.974950 | orchestrator | 2025-03-26 17:03:58.974959 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules 
config] ************* 2025-03-26 17:03:58.974967 | orchestrator | Wednesday 26 March 2025 16:58:11 +0000 (0:00:01.603) 0:03:36.976 ******* 2025-03-26 17:03:58.974976 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.974984 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.974993 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.975001 | orchestrator | 2025-03-26 17:03:58.975010 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-03-26 17:03:58.975019 | orchestrator | Wednesday 26 March 2025 16:58:13 +0000 (0:00:02.593) 0:03:39.569 ******* 2025-03-26 17:03:58.975027 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.975040 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.975049 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.975058 | orchestrator | 2025-03-26 17:03:58.975066 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-03-26 17:03:58.975075 | orchestrator | Wednesday 26 March 2025 16:58:14 +0000 (0:00:00.549) 0:03:40.119 ******* 2025-03-26 17:03:58.975083 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.975092 | orchestrator | 2025-03-26 17:03:58.975100 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-03-26 17:03:58.975109 | orchestrator | Wednesday 26 March 2025 16:58:15 +0000 (0:00:01.652) 0:03:41.772 ******* 2025-03-26 17:03:58.975118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-26 17:03:58.975127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-26 17:03:58.975141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-03-26 17:03:58.975150 | orchestrator | 2025-03-26 17:03:58.975158 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-03-26 17:03:58.975167 | orchestrator | Wednesday 26 March 2025 16:58:20 +0000 (0:00:04.843) 0:03:46.615 ******* 2025-03-26 17:03:58.975176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-26 17:03:58.975185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-26 17:03:58.975198 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.975207 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.975216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-03-26 17:03:58.975225 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.975233 | orchestrator | 2025-03-26 17:03:58.975242 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-03-26 17:03:58.975251 | orchestrator | Wednesday 26 March 2025 16:58:21 +0000 (0:00:00.437) 0:03:47.052 ******* 2025-03-26 17:03:58.975259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-26 17:03:58.975272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': 
'3000'}})  2025-03-26 17:03:58.975281 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.975289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-26 17:03:58.975298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-26 17:03:58.975307 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.975315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-03-26 17:03:58.975327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-03-26 17:03:58.977352 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.977437 | orchestrator | 2025-03-26 17:03:58.977448 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-03-26 17:03:58.977456 | orchestrator | Wednesday 26 March 2025 16:58:22 +0000 (0:00:01.098) 0:03:48.151 ******* 2025-03-26 17:03:58.977464 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.977472 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.977480 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.977488 | orchestrator | 2025-03-26 17:03:58.977496 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-03-26 17:03:58.977504 | orchestrator | Wednesday 26 March 2025 16:58:23 +0000 (0:00:01.334) 0:03:49.485 ******* 2025-03-26 17:03:58.977512 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.977520 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.977528 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.977543 | orchestrator | 2025-03-26 17:03:58.977552 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-03-26 17:03:58.977560 | orchestrator | Wednesday 26 March 2025 16:58:26 +0000 (0:00:02.543) 0:03:52.029 ******* 2025-03-26 17:03:58.977568 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.977576 | orchestrator | 2025-03-26 17:03:58.977584 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-03-26 17:03:58.977592 | orchestrator | Wednesday 26 March 2025 16:58:27 +0000 (0:00:01.463) 0:03:53.493 ******* 2025-03-26 17:03:58.977601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 
'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.977611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.977620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.977636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.977649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 
5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.977658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.977666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.977675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.977683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.977691 | orchestrator | 2025-03-26 17:03:58.977703 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-03-26 17:03:58.977711 | orchestrator | Wednesday 26 March 2025 16:58:36 +0000 (0:00:09.310) 0:04:02.803 ******* 2025-03-26 17:03:58.977726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 
'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.977734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.977742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.977751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.977759 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.977771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.977783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.977792 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.977800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.977809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.977817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': 
['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.977825 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.977857 | orchestrator | 2025-03-26 17:03:58.977866 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-03-26 17:03:58.977874 | orchestrator | Wednesday 26 March 2025 16:58:38 +0000 (0:00:01.555) 0:04:04.358 ******* 2025-03-26 17:03:58.977882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-26 17:03:58.977891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-26 17:03:58.977903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-26 17:03:58.977916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-26 17:03:58.977925 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.977933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-26 17:03:58.977941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-26 17:03:58.977955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-26 17:03:58.977963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-26 17:03:58.977971 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.977979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-26 17:03:58.977987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-03-26 17:03:58.977995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 
'tls_backend': 'no'}})  2025-03-26 17:03:58.978003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-03-26 17:03:58.978010 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.978056 | orchestrator | 2025-03-26 17:03:58.978065 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-03-26 17:03:58.978073 | orchestrator | Wednesday 26 March 2025 16:58:39 +0000 (0:00:01.244) 0:04:05.602 ******* 2025-03-26 17:03:58.978081 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.978089 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.978097 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.978106 | orchestrator | 2025-03-26 17:03:58.978114 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-03-26 17:03:58.978122 | orchestrator | Wednesday 26 March 2025 16:58:41 +0000 (0:00:01.526) 0:04:07.129 ******* 2025-03-26 17:03:58.978130 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.978138 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.978146 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.978155 | orchestrator | 2025-03-26 17:03:58.978163 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-03-26 17:03:58.978171 | orchestrator | Wednesday 26 March 2025 16:58:43 +0000 (0:00:02.507) 0:04:09.636 ******* 2025-03-26 17:03:58.978182 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.978194 | orchestrator | 2025-03-26 17:03:58.978202 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-03-26 17:03:58.978210 | orchestrator | Wednesday 26 March 2025 16:58:44 +0000 (0:00:01.214) 0:04:10.850 ******* 2025-03-26 17:03:58.978224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-26 17:03:58.978234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-26 17:03:58.978264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-03-26 17:03:58.978273 | orchestrator | 2025-03-26 17:03:58.978282 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-03-26 17:03:58.978290 | orchestrator | Wednesday 26 March 2025 16:58:50 +0000 (0:00:05.194) 0:04:16.045 ******* 2025-03-26 17:03:58.978298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-26 17:03:58.978310 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.978328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-26 17:03:58.978337 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.978346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 
'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-03-26 17:03:58.978366 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.978374 | orchestrator | 2025-03-26 17:03:58.978385 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-03-26 17:03:58.978394 | orchestrator | Wednesday 26 March 2025 16:58:51 +0000 (0:00:01.089) 0:04:17.135 ******* 2025-03-26 17:03:58.978402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-26 17:03:58.978411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-26 17:03:58.978421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-26 17:03:58.978430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-26 17:03:58.978439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}})  2025-03-26 17:03:58.978448 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.978461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-26 17:03:58.978470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-26 17:03:58.978481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-26 17:03:58.978489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-26 17:03:58.978497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-26 17:03:58.978505 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.978513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-26 17:03:58.978521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-26 17:03:58.978533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-03-26 17:03:58.978541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-03-26 17:03:58.978549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-03-26 17:03:58.978557 | 
orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.978565 | orchestrator | 2025-03-26 17:03:58.978574 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-03-26 17:03:58.978581 | orchestrator | Wednesday 26 March 2025 16:58:52 +0000 (0:00:01.667) 0:04:18.803 ******* 2025-03-26 17:03:58.978589 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.978598 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.978606 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.978613 | orchestrator | 2025-03-26 17:03:58.978621 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-03-26 17:03:58.978629 | orchestrator | Wednesday 26 March 2025 16:58:54 +0000 (0:00:01.515) 0:04:20.318 ******* 2025-03-26 17:03:58.978637 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.978645 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.978653 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.978661 | orchestrator | 2025-03-26 17:03:58.978669 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-03-26 17:03:58.978677 | orchestrator | Wednesday 26 March 2025 16:58:57 +0000 (0:00:02.838) 0:04:23.157 ******* 2025-03-26 17:03:58.978688 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.978696 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.978704 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.978712 | orchestrator | 2025-03-26 17:03:58.978720 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-03-26 17:03:58.978728 | orchestrator | Wednesday 26 March 2025 16:58:57 +0000 (0:00:00.574) 0:04:23.732 ******* 2025-03-26 17:03:58.978736 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.978744 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.978752 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.978760 | orchestrator | 2025-03-26 17:03:58.978768 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-03-26 17:03:58.978776 | orchestrator | Wednesday 26 March 2025 16:58:58 +0000 (0:00:00.350) 0:04:24.082 ******* 2025-03-26 17:03:58.978784 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.978792 | orchestrator | 2025-03-26 17:03:58.978800 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-03-26 17:03:58.978807 | orchestrator | Wednesday 26 March 2025 16:58:59 +0000 (0:00:01.623) 0:04:25.706 ******* 2025-03-26 17:03:58.978816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
"roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-26 17:03:58.978825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-26 17:03:58.978852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-26 17:03:58.978861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-26 17:03:58.978873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-26 17:03:58.978882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-26 17:03:58.978890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-03-26 17:03:58.978902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-26 17:03:58.978911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-26 17:03:58.978923 | orchestrator | 2025-03-26 17:03:58.978931 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-03-26 17:03:58.978939 | orchestrator | Wednesday 26 March 2025 16:59:05 +0000 (0:00:05.895) 0:04:31.601 ******* 2025-03-26 17:03:58.978947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-26 17:03:58.978962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-26 17:03:58.978971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-26 17:03:58.978979 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.978990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-26 17:03:58.978999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-26 17:03:58.979011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-26 17:03:58.979019 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.979032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-03-26 17:03:58.979041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-03-26 17:03:58.979050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-03-26 17:03:58.979058 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.979066 | orchestrator | 2025-03-26 17:03:58.979074 | orchestrator | TASK [haproxy-config : Configuring firewall for 
keystone] ********************** 2025-03-26 17:03:58.979082 | orchestrator | Wednesday 26 March 2025 16:59:07 +0000 (0:00:01.397) 0:04:32.999 ******* 2025-03-26 17:03:58.979094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-26 17:03:58.979109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-26 17:03:58.979118 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.979126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-26 17:03:58.979134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-26 17:03:58.979142 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.979150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-26 17:03:58.979158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-03-26 17:03:58.979166 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.979174 | orchestrator | 2025-03-26 17:03:58.979182 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-03-26 17:03:58.979190 | orchestrator | Wednesday 26 March 2025 16:59:08 +0000 (0:00:01.374) 0:04:34.373 ******* 2025-03-26 17:03:58.979198 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.979206 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.979214 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.979222 | orchestrator | 2025-03-26 17:03:58.979230 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-03-26 17:03:58.979237 | orchestrator | Wednesday 26 March 2025 16:59:10 +0000 (0:00:01.692) 0:04:36.066 ******* 2025-03-26 17:03:58.979245 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.979253 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.979261 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.979269 | orchestrator | 2025-03-26 17:03:58.979277 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-03-26 17:03:58.979285 | orchestrator | Wednesday 26 March 2025 16:59:13 +0000 (0:00:03.046) 0:04:39.112 ******* 2025-03-26 17:03:58.979293 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.979301 
| orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.979309 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.979316 | orchestrator | 2025-03-26 17:03:58.979324 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-03-26 17:03:58.979332 | orchestrator | Wednesday 26 March 2025 16:59:13 +0000 (0:00:00.350) 0:04:39.463 ******* 2025-03-26 17:03:58.979340 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.979348 | orchestrator | 2025-03-26 17:03:58.979359 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-03-26 17:03:58.979367 | orchestrator | Wednesday 26 March 2025 16:59:15 +0000 (0:00:01.640) 0:04:41.103 ******* 2025-03-26 17:03:58.979375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-26 17:03:58.979397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-26 17:03:58.979421 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-03-26 17:03:58.979441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979450 | orchestrator | 2025-03-26 17:03:58.979458 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-03-26 17:03:58.979466 | orchestrator | Wednesday 26 March 2025 16:59:20 +0000 (0:00:05.202) 0:04:46.306 ******* 2025-03-26 17:03:58.979483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-26 17:03:58.979492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979500 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.979509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-26 17:03:58.979517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979529 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.979547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-03-26 17:03:58.979556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979564 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.979572 | orchestrator | 2025-03-26 17:03:58.979581 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-03-26 17:03:58.979589 | orchestrator | Wednesday 26 March 2025 16:59:21 +0000 (0:00:01.489) 0:04:47.795 ******* 2025-03-26 17:03:58.979597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-26 17:03:58.979605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-26 17:03:58.979615 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.979624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-26 17:03:58.979632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-26 17:03:58.979640 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.979648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-03-26 17:03:58.979656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-03-26 17:03:58.979664 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.979674 | orchestrator | 2025-03-26 17:03:58.979682 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-03-26 17:03:58.979690 | orchestrator | Wednesday 26 March 2025 16:59:23 +0000 (0:00:01.570) 0:04:49.366 ******* 2025-03-26 17:03:58.979698 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.979706 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.979714 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.979721 | orchestrator | 2025-03-26 17:03:58.979729 | orchestrator | TASK [proxysql-config : Copying over 
magnum ProxySQL rules config] ************* 2025-03-26 17:03:58.979737 | orchestrator | Wednesday 26 March 2025 16:59:24 +0000 (0:00:01.532) 0:04:50.899 ******* 2025-03-26 17:03:58.979745 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.979753 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.979761 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.979769 | orchestrator | 2025-03-26 17:03:58.979777 | orchestrator | TASK [include_role : manila] *************************************************** 2025-03-26 17:03:58.979785 | orchestrator | Wednesday 26 March 2025 16:59:27 +0000 (0:00:02.840) 0:04:53.740 ******* 2025-03-26 17:03:58.979793 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.979800 | orchestrator | 2025-03-26 17:03:58.979808 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-03-26 17:03:58.979816 | orchestrator | Wednesday 26 March 2025 16:59:29 +0000 (0:00:01.413) 0:04:55.153 ******* 2025-03-26 17:03:58.979838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-26 17:03:58.979848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-26 17:03:58.979891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-03-26 17:03:58.979903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.979964 | orchestrator | 2025-03-26 17:03:58.979972 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-03-26 17:03:58.979980 | orchestrator | Wednesday 26 March 2025 16:59:36 +0000 (0:00:07.162) 0:05:02.316 ******* 2025-03-26 17:03:58.979997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-03-26 17:03:58.980006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.980014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.980026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.980035 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.980047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-03-26 17:03:58.980060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.980073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.980082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.980090 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.980099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-03-26 17:03:58.980111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.980120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.980128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.980142 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.980151 | orchestrator | 2025-03-26 17:03:58.980159 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-03-26 17:03:58.980167 | orchestrator | Wednesday 26 March 2025 16:59:37 +0000 (0:00:01.256) 0:05:03.573 ******* 2025-03-26 17:03:58.980175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-03-26 17:03:58.980187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-03-26 17:03:58.980195 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.980203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-03-26 17:03:58.980211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-03-26 17:03:58.980219 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.980227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-03-26 17:03:58.980238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-03-26 17:03:58.980247 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.980255 | orchestrator | 2025-03-26 17:03:58.980262 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-03-26 17:03:58.980270 | orchestrator | Wednesday 26 March 2025 16:59:39 +0000 (0:00:01.592) 0:05:05.166 ******* 2025-03-26 17:03:58.980278 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.980286 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.980294 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.980302 | orchestrator | 
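[Note on the configuration pattern above] The repeated "haproxy-config : Copying over <service> haproxy config" tasks all follow the same loop: each kolla-ansible service definition printed as an item carries a 'haproxy' sub-dict (enabled, mode, external, port, listen_port, plus optional frontend_http_extra / backend_http_extra lines), and that sub-dict is what becomes an HAProxy frontend/backend pair on the controller nodes; the matching "proxysql-config" tasks write the per-service ProxySQL users and rules files in the same per-service loop. The Python sketch below shows, roughly, how one such entry could be rendered into HAProxy config text. It is an illustrative assumption only: render_haproxy_service(), the member list and the 192.168.16.9 VIP are made up for the example and are not the actual kolla-ansible template.

    # Illustrative sketch (not the kolla-ansible template): render one 'haproxy'
    # entry, as dumped in the task items above, into an HAProxy frontend/backend.
    def render_haproxy_service(name, cfg, vip="192.168.16.9", members=()):
        """Render a single service entry (e.g. keystone_internal) to config text."""
        if not cfg.get("enabled"):
            return ""
        lines = [
            f"frontend {name}_front",
            f"    mode {cfg['mode']}",
            f"    bind {vip}:{cfg['listen_port']}",
            f"    default_backend {name}_back",
            f"backend {name}_back",
            f"    mode {cfg['mode']}",
        ]
        # extra backend options such as 'balance roundrobin' come straight from
        # the service definition's backend_http_extra list
        lines += [f"    {opt}" for opt in cfg.get("backend_http_extra", [])]
        lines += [
            f"    server {host} {addr}:{cfg['port']} check"
            for host, addr in members
        ]
        return "\n".join(lines)

    # Example using the keystone_internal entry seen in the log above:
    keystone_internal = {
        "enabled": True, "mode": "http", "external": False,
        "port": "5000", "listen_port": "5000",
        "backend_http_extra": ['balance "roundrobin"'],
    }
    print(render_haproxy_service(
        "keystone_internal", keystone_internal,
        members=[("testbed-node-0", "192.168.16.10"),
                 ("testbed-node-1", "192.168.16.11"),
                 ("testbed-node-2", "192.168.16.12")]))

With the keystone_internal entry this prints a frontend bound on port 5000 and a backend carrying 'balance "roundrobin"' plus one server line per testbed node, which is the general shape of the configuration these tasks distribute to the controllers.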
2025-03-26 17:03:58.980310 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-03-26 17:03:58.980318 | orchestrator | Wednesday 26 March 2025 16:59:40 +0000 (0:00:01.597) 0:05:06.763 ******* 2025-03-26 17:03:58.980326 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.980334 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.980342 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.980350 | orchestrator | 2025-03-26 17:03:58.980357 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-03-26 17:03:58.980365 | orchestrator | Wednesday 26 March 2025 16:59:43 +0000 (0:00:02.538) 0:05:09.301 ******* 2025-03-26 17:03:58.980373 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.980381 | orchestrator | 2025-03-26 17:03:58.980389 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-03-26 17:03:58.980397 | orchestrator | Wednesday 26 March 2025 16:59:44 +0000 (0:00:01.597) 0:05:10.898 ******* 2025-03-26 17:03:58.980405 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-03-26 17:03:58.980413 | orchestrator | 2025-03-26 17:03:58.980421 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-03-26 17:03:58.980429 | orchestrator | Wednesday 26 March 2025 16:59:48 +0000 (0:00:03.653) 0:05:14.552 ******* 2025-03-26 17:03:58.980437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-26 17:03:58.980460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-26 17:03:58.980469 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.980478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-26 17:03:58.980487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-26 17:03:58.980495 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.980508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-26 17:03:58.980525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-26 17:03:58.980534 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.980542 | orchestrator | 2025-03-26 17:03:58.980550 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-03-26 17:03:58.980558 | orchestrator | Wednesday 26 March 2025 16:59:53 +0000 (0:00:04.412) 0:05:18.965 ******* 2025-03-26 17:03:58.980566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-26 17:03:58.980589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-26 17:03:58.980599 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.980607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-26 
17:03:58.980615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-26 17:03:58.980624 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.980641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-03-26 17:03:58.980653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-03-26 17:03:58.980662 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.980670 | orchestrator | 2025-03-26 17:03:58.980678 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-03-26 17:03:58.980686 | orchestrator | Wednesday 26 
March 2025 16:59:56 +0000 (0:00:03.862) 0:05:22.828 ******* 2025-03-26 17:03:58.980694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-26 17:03:58.980703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-26 17:03:58.980711 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.980720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-26 17:03:58.980728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-26 17:03:58.980739 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.980751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-26 17:03:58.980764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-03-26 17:03:58.980773 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.980781 | orchestrator | 2025-03-26 17:03:58.980789 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-03-26 17:03:58.980797 | orchestrator | Wednesday 26 March 2025 17:00:00 +0000 (0:00:03.962) 0:05:26.790 ******* 2025-03-26 17:03:58.980805 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.980812 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.980820 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.980865 | orchestrator | 2025-03-26 17:03:58.980875 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-03-26 17:03:58.980883 | orchestrator | Wednesday 26 March 2025 17:00:03 +0000 (0:00:02.500) 0:05:29.291 ******* 2025-03-26 17:03:58.980891 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.980899 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.980907 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.980915 | orchestrator | 2025-03-26 17:03:58.980923 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-03-26 17:03:58.980931 | orchestrator | Wednesday 26 March 2025 17:00:05 +0000 (0:00:02.260) 0:05:31.551 ******* 2025-03-26 17:03:58.980939 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.980947 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.980955 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.980963 | orchestrator | 2025-03-26 17:03:58.980971 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-03-26 17:03:58.980979 | orchestrator | Wednesday 26 March 2025 17:00:05 +0000 (0:00:00.356) 0:05:31.908 ******* 2025-03-26 17:03:58.980987 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.980995 | orchestrator | 2025-03-26 17:03:58.981003 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-03-26 17:03:58.981011 | orchestrator | Wednesday 26 March 2025 17:00:07 +0000 (0:00:01.641) 0:05:33.549 ******* 2025-03-26 17:03:58.981019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-03-26 17:03:58.981031 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-03-26 17:03:58.981045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-03-26 17:03:58.981054 | orchestrator | 2025-03-26 17:03:58.981062 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-03-26 17:03:58.981069 | orchestrator | Wednesday 26 March 2025 17:00:09 +0000 (0:00:01.821) 0:05:35.371 ******* 2025-03-26 17:03:58.981078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-03-26 17:03:58.981086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-03-26 17:03:58.981095 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.981106 | orchestrator | 
skipping: [testbed-node-1] 2025-03-26 17:03:58.981120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-03-26 17:03:58.981129 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.981137 | orchestrator | 2025-03-26 17:03:58.981145 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-03-26 17:03:58.981153 | orchestrator | Wednesday 26 March 2025 17:00:10 +0000 (0:00:00.672) 0:05:36.044 ******* 2025-03-26 17:03:58.981161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-03-26 17:03:58.981169 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.981177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-03-26 17:03:58.981185 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.981193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-03-26 17:03:58.981201 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.981209 | orchestrator | 2025-03-26 17:03:58.981221 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-03-26 17:03:58.981229 | orchestrator | Wednesday 26 March 2025 17:00:11 +0000 (0:00:00.994) 0:05:37.038 ******* 2025-03-26 17:03:58.981237 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.981245 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.981253 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.981261 | orchestrator | 2025-03-26 17:03:58.981269 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-03-26 17:03:58.981277 | orchestrator | Wednesday 26 March 2025 17:00:11 +0000 (0:00:00.744) 0:05:37.782 ******* 2025-03-26 17:03:58.981285 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.981293 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.981301 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.981309 | orchestrator | 2025-03-26 17:03:58.981317 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-03-26 17:03:58.981325 | 
orchestrator | Wednesday 26 March 2025 17:00:13 +0000 (0:00:02.009) 0:05:39.792 ******* 2025-03-26 17:03:58.981333 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.981340 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.981348 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.981356 | orchestrator | 2025-03-26 17:03:58.981364 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-03-26 17:03:58.981372 | orchestrator | Wednesday 26 March 2025 17:00:14 +0000 (0:00:00.327) 0:05:40.119 ******* 2025-03-26 17:03:58.981380 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.981388 | orchestrator | 2025-03-26 17:03:58.981394 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-03-26 17:03:58.981404 | orchestrator | Wednesday 26 March 2025 17:00:15 +0000 (0:00:01.689) 0:05:41.809 ******* 2025-03-26 17:03:58.981411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-26 17:03:58.981419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-26 17:03:58.981427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-26 17:03:58.981593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-26 17:03:58.981625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.981633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.981649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.981659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.981678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-03-26 17:03:58.981689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-26 17:03:58.981714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.981722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981732 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.981740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.981747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.981767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-26 17:03:58.981777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 
'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-26 17:03:58.981800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-26 17:03:58.981808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-26 17:03:58.981825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-03-26 17:03:58.981893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-26 17:03:58.981933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.981948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.981960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-26 17:03:58.981975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.981985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.981997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.982005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-26 17:03:58.982053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-26 17:03:58.982061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982072 | orchestrator | 2025-03-26 17:03:58.982079 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-03-26 17:03:58.982103 | orchestrator | Wednesday 26 March 2025 17:00:21 +0000 (0:00:05.903) 0:05:47.712 ******* 2025-03-26 17:03:58.982115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-26 17:03:58.982122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-26 17:03:58.982164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.982183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-26 17:03:58.982191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.982205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-26 17:03:58.982245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-26 17:03:58.982276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.982288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.982296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-26 17:03:58.982326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.982337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-26 17:03:58.982346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.982363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982379 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.982388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-26 17:03:58.982401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.982421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.982433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-26 17:03:58.982455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-26 17:03:58.982463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982476 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.982484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-03-26 17:03:58.982496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-03-26 17:03:58.982539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.982557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.982565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-03-26 17:03:58.982584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.982603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-03-26 17:03:58.982611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-03-26 17:03:58.982634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-03-26 17:03:58.982641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.982653 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.982660 | orchestrator | 2025-03-26 17:03:58.982676 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-03-26 17:03:58.982684 | orchestrator | Wednesday 26 March 2025 17:00:24 +0000 (0:00:02.309) 0:05:50.022 ******* 2025-03-26 17:03:58.982691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-26 17:03:58.982698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-26 17:03:58.982705 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.982715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-26 17:03:58.982722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-26 17:03:58.982729 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.982736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-03-26 17:03:58.982743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-03-26 17:03:58.982750 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.982760 | orchestrator | 2025-03-26 17:03:58.982767 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-03-26 17:03:58.982777 | orchestrator | Wednesday 26 March 2025 17:00:26 +0000 (0:00:02.391) 0:05:52.413 ******* 2025-03-26 17:03:58.982784 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.982791 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.982802 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.982809 | orchestrator | 2025-03-26 17:03:58.982816 | orchestrator | TASK [proxysql-config : Copying over 
neutron ProxySQL rules config] ************ 2025-03-26 17:03:58.982823 | orchestrator | Wednesday 26 March 2025 17:00:27 +0000 (0:00:01.533) 0:05:53.947 ******* 2025-03-26 17:03:58.982869 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.982876 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.982883 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.982890 | orchestrator | 2025-03-26 17:03:58.982897 | orchestrator | TASK [include_role : placement] ************************************************ 2025-03-26 17:03:58.982904 | orchestrator | Wednesday 26 March 2025 17:00:30 +0000 (0:00:02.642) 0:05:56.589 ******* 2025-03-26 17:03:58.982911 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.982918 | orchestrator | 2025-03-26 17:03:58.982925 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-03-26 17:03:58.982932 | orchestrator | Wednesday 26 March 2025 17:00:32 +0000 (0:00:01.935) 0:05:58.524 ******* 2025-03-26 17:03:58.982939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.982951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.982965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.982973 | orchestrator | 2025-03-26 17:03:58.982980 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-03-26 17:03:58.982987 | orchestrator | Wednesday 26 March 2025 17:00:37 +0000 (0:00:04.949) 0:06:03.474 ******* 2025-03-26 17:03:58.982998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.983006 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.983013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.983024 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.983032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.983039 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.983046 | orchestrator | 2025-03-26 17:03:58.983053 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-03-26 17:03:58.983059 | orchestrator | Wednesday 26 March 2025 17:00:38 +0000 (0:00:00.568) 0:06:04.042 ******* 2025-03-26 17:03:58.983066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983081 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.983088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983102 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.983109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983123 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.983130 | orchestrator | 2025-03-26 17:03:58.983137 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-03-26 17:03:58.983147 | orchestrator | Wednesday 26 March 2025 17:00:39 +0000 (0:00:01.374) 0:06:05.416 ******* 2025-03-26 17:03:58.983154 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.983161 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.983168 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.983179 | orchestrator | 2025-03-26 17:03:58.983186 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-03-26 17:03:58.983193 | orchestrator | Wednesday 26 March 2025 17:00:41 +0000 (0:00:01.557) 0:06:06.973 ******* 2025-03-26 17:03:58.983199 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.983206 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.983213 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.983220 | orchestrator | 2025-03-26 17:03:58.983227 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-03-26 17:03:58.983234 | orchestrator | Wednesday 26 March 2025 17:00:43 +0000 (0:00:02.309) 0:06:09.283 ******* 2025-03-26 17:03:58.983240 | orchestrator | 
included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.983247 | orchestrator | 2025-03-26 17:03:58.983254 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-03-26 17:03:58.983261 | orchestrator | Wednesday 26 March 2025 17:00:45 +0000 (0:00:01.802) 0:06:11.086 ******* 2025-03-26 17:03:58.983274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.983282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.983290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.983301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': 
'30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.983312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.983320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.983332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.983340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.983350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.983361 | orchestrator | 2025-03-26 17:03:58.983368 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-03-26 17:03:58.983375 | orchestrator | Wednesday 26 March 2025 17:00:51 +0000 (0:00:06.568) 0:06:17.654 ******* 2025-03-26 17:03:58.983383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.983396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.983403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.983409 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.983416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.983429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.983436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.983442 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.983453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.983460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.983466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.983477 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.983483 | orchestrator | 2025-03-26 17:03:58.983489 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-03-26 17:03:58.983495 | orchestrator | Wednesday 26 March 2025 17:00:52 +0000 (0:00:01.202) 0:06:18.857 ******* 2025-03-26 17:03:58.983501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983530 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.983537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983562 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.983568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-03-26 17:03:58.983593 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.983599 | orchestrator | 2025-03-26 17:03:58.983605 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-03-26 17:03:58.983611 | orchestrator | Wednesday 26 March 2025 17:00:54 +0000 (0:00:01.586) 0:06:20.443 ******* 2025-03-26 17:03:58.983621 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.983627 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.983633 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.983639 | orchestrator | 2025-03-26 17:03:58.983645 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-03-26 17:03:58.983651 | orchestrator | Wednesday 26 March 2025 17:00:56 +0000 (0:00:01.697) 0:06:22.141 ******* 2025-03-26 17:03:58.983657 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.983664 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.983670 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.983676 | orchestrator | 2025-03-26 17:03:58.983682 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-03-26 17:03:58.983688 | orchestrator | Wednesday 26 March 2025 17:00:58 +0000 (0:00:02.718) 0:06:24.859 ******* 2025-03-26 17:03:58.983694 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.983700 | orchestrator | 2025-03-26 17:03:58.983707 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] 
****************** 2025-03-26 17:03:58.983713 | orchestrator | Wednesday 26 March 2025 17:01:00 +0000 (0:00:01.853) 0:06:26.713 ******* 2025-03-26 17:03:58.983719 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-03-26 17:03:58.983726 | orchestrator | 2025-03-26 17:03:58.983734 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-03-26 17:03:58.983741 | orchestrator | Wednesday 26 March 2025 17:01:02 +0000 (0:00:01.396) 0:06:28.110 ******* 2025-03-26 17:03:58.983750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-26 17:03:58.983757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-26 17:03:58.983763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-03-26 17:03:58.983770 | orchestrator | 2025-03-26 17:03:58.983776 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-03-26 17:03:58.983782 | orchestrator | Wednesday 26 March 2025 17:01:08 +0000 (0:00:06.472) 0:06:34.582 ******* 2025-03-26 17:03:58.983793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-26 17:03:58.983803 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.983809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-26 17:03:58.983816 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.983822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-26 17:03:58.983839 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.983845 | orchestrator | 2025-03-26 17:03:58.983851 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-03-26 17:03:58.983857 | orchestrator | Wednesday 26 March 2025 17:01:10 +0000 (0:00:02.209) 0:06:36.792 ******* 2025-03-26 17:03:58.983864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-26 17:03:58.983870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-26 17:03:58.983877 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.983883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-26 17:03:58.983895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-26 17:03:58.983902 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.983908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-26 17:03:58.983914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-03-26 17:03:58.983920 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.983926 | orchestrator | 2025-03-26 17:03:58.983933 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-26 17:03:58.983939 | orchestrator | Wednesday 26 March 2025 17:01:12 +0000 (0:00:02.072) 0:06:38.864 ******* 2025-03-26 17:03:58.983945 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.983951 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.983957 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.983963 | 
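The changed/skipping pattern in the haproxy-config tasks above follows one rule: an item only produces haproxy configuration when the service is enabled and carries a 'haproxy' mapping. That is why nova-api is reported as changed while nova-scheduler (no 'haproxy' key) and nova-super-conductor (enabled: 'no') are skipped, and why individual listeners such as nova_metadata_external (enabled: 'no') drop out as well. The following minimal Python sketch reproduces that filtering for illustration only; it is not the kolla-ansible role logic, and the trimmed service map is an assumption shaped after the log items.

# Illustrative only: mirrors the selection pattern visible in the log above,
# not the actual kolla-ansible haproxy-config implementation.

def is_enabled(value):
    # kolla-style flags show up both as booleans and as 'yes'/'no' strings
    return value in (True, "yes")

def haproxy_listeners(services):
    # Yield (service, listener, spec) for every listener that would get config.
    for name, svc in services.items():
        if not is_enabled(svc.get("enabled")):
            continue                         # e.g. nova-super-conductor: enabled 'no'
        for listener, spec in svc.get("haproxy", {}).items():
            if is_enabled(spec.get("enabled")):
                yield name, listener, spec   # nova-scheduler has no 'haproxy' key

# Trimmed example shaped after the 'nova-api' item in the log:
services = {
    "nova-api": {
        "enabled": True,
        "haproxy": {
            "nova_api": {"enabled": True, "external": False, "port": "8774"},
            "nova_api_external": {"enabled": True, "external": True, "port": "8774"},
            "nova_metadata": {"enabled": True, "external": False, "port": "8775"},
            "nova_metadata_external": {"enabled": "no", "external": True, "port": "8775"},
        },
    },
    "nova-scheduler": {"enabled": True},        # skipped: no 'haproxy' mapping
    "nova-super-conductor": {"enabled": "no"},  # skipped: disabled
}

for svc, listener, spec in haproxy_listeners(services):
    print(svc, listener, spec["port"], "external" if spec["external"] else "internal")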
orchestrator | 2025-03-26 17:03:58.983969 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-26 17:03:58.983975 | orchestrator | Wednesday 26 March 2025 17:01:16 +0000 (0:00:03.808) 0:06:42.673 ******* 2025-03-26 17:03:58.983986 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.983992 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.983998 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.984004 | orchestrator | 2025-03-26 17:03:58.984010 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-03-26 17:03:58.984016 | orchestrator | Wednesday 26 March 2025 17:01:20 +0000 (0:00:03.909) 0:06:46.582 ******* 2025-03-26 17:03:58.984025 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-03-26 17:03:58.984032 | orchestrator | 2025-03-26 17:03:58.984038 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-03-26 17:03:58.984044 | orchestrator | Wednesday 26 March 2025 17:01:22 +0000 (0:00:01.476) 0:06:48.058 ******* 2025-03-26 17:03:58.984050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-26 17:03:58.984057 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.984063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-26 17:03:58.984069 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.984075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-26 17:03:58.984082 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.984088 | orchestrator | 2025-03-26 17:03:58.984094 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-03-26 17:03:58.984100 | orchestrator | Wednesday 26 March 2025 17:01:23 +0000 (0:00:01.748) 0:06:49.807 ******* 2025-03-26 17:03:58.984114 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-26 17:03:58.984121 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.984128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-26 17:03:58.984138 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.984144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-03-26 17:03:58.984151 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.984157 | orchestrator | 2025-03-26 17:03:58.984163 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-03-26 17:03:58.984169 | orchestrator | Wednesday 26 March 2025 17:01:26 +0000 (0:00:02.346) 0:06:52.153 ******* 2025-03-26 17:03:58.984175 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.984181 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.984187 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.984196 | orchestrator | 2025-03-26 17:03:58.984203 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-26 17:03:58.984209 | orchestrator | Wednesday 26 March 2025 17:01:28 +0000 (0:00:02.295) 0:06:54.448 ******* 2025-03-26 17:03:58.984215 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.984221 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.984227 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.984234 | orchestrator | 2025-03-26 17:03:58.984240 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-26 17:03:58.984246 | orchestrator | Wednesday 26 March 2025 17:01:31 +0000 (0:00:03.088) 0:06:57.537 ******* 2025-03-26 17:03:58.984252 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.984258 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.984264 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.984270 | orchestrator | 2025-03-26 17:03:58.984328 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-03-26 17:03:58.984335 | orchestrator | Wednesday 26 March 2025 17:01:35 +0000 
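Of the nova-cell console proxies handled above, only nova-novncproxy is enabled; nova-spicehtml5proxy is skipped because all of its listeners are disabled. The enabled novncproxy listeners carry 'backend_http_extra': ['timeout tunnel 1h'], presumably so that long-lived console (websocket) connections are not cut off by the default HTTP timeouts. As a rough, purely illustrative rendering (kolla-ansible uses its own Jinja2 templates; the bind syntax and the node-to-address mapping below are assumptions based on the healthcheck URLs in the log), such a listener spec could expand to an haproxy frontend/backend pair like this:

# Illustrative haproxy rendering of one listener spec from the log; not the
# template kolla-ansible actually ships.

def render_listener(name, spec, backends):
    lines = [
        f"frontend {name}_front",
        f"    mode {spec['mode']}",
        f"    bind *:{spec['listen_port']}",
        f"    default_backend {name}_back",
        "",
        f"backend {name}_back",
        f"    mode {spec['mode']}",
    ]
    lines += [f"    {extra}" for extra in spec.get("backend_http_extra", [])]
    lines += [f"    server {host} {addr}:{spec['port']}" for host, addr in backends]
    return "\n".join(lines)

spec = {"enabled": True, "mode": "http", "external": False,
        "port": "6080", "listen_port": "6080",
        "backend_http_extra": ["timeout tunnel 1h"]}

print(render_listener("nova_novncproxy", spec,
                      [("testbed-node-0", "192.168.16.10"),
                       ("testbed-node-1", "192.168.16.11"),
                       ("testbed-node-2", "192.168.16.12")]))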
(0:00:03.655) 0:07:01.193 ******* 2025-03-26 17:03:58.984341 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-03-26 17:03:58.984348 | orchestrator | 2025-03-26 17:03:58.984354 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-03-26 17:03:58.984360 | orchestrator | Wednesday 26 March 2025 17:01:36 +0000 (0:00:01.758) 0:07:02.952 ******* 2025-03-26 17:03:58.984367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-26 17:03:58.984373 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.984380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-26 17:03:58.984390 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.984399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-26 17:03:58.984490 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.984499 | orchestrator | 2025-03-26 17:03:58.984506 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-03-26 17:03:58.984513 | orchestrator | Wednesday 26 March 2025 17:01:38 +0000 (0:00:01.923) 0:07:04.875 ******* 2025-03-26 17:03:58.984520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-26 17:03:58.984527 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.984534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 
'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-26 17:03:58.984540 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.984547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-03-26 17:03:58.984554 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.984561 | orchestrator | 2025-03-26 17:03:58.984567 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-03-26 17:03:58.984574 | orchestrator | Wednesday 26 March 2025 17:01:40 +0000 (0:00:01.852) 0:07:06.727 ******* 2025-03-26 17:03:58.984581 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.984587 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.984594 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.984601 | orchestrator | 2025-03-26 17:03:58.984607 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-03-26 17:03:58.984614 | orchestrator | Wednesday 26 March 2025 17:01:43 +0000 (0:00:02.341) 0:07:09.068 ******* 2025-03-26 17:03:58.984621 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.984627 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.984634 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.984640 | orchestrator | 2025-03-26 17:03:58.984647 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-03-26 17:03:58.984654 | orchestrator | Wednesday 26 March 2025 17:01:46 +0000 (0:00:03.163) 0:07:12.232 ******* 2025-03-26 17:03:58.984660 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.984667 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.984678 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.984685 | orchestrator | 2025-03-26 17:03:58.984692 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-03-26 17:03:58.984704 | orchestrator | Wednesday 26 March 2025 17:01:50 +0000 (0:00:03.949) 0:07:16.181 ******* 2025-03-26 17:03:58.984711 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.984717 | orchestrator | 2025-03-26 17:03:58.984724 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-03-26 17:03:58.984731 | orchestrator | Wednesday 26 March 2025 17:01:52 +0000 (0:00:01.837) 0:07:18.018 ******* 2025-03-26 17:03:58.984755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.984763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-26 17:03:58.984773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.984781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.984787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.984798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.984805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-26 17:03:58.984839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.984848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.984854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.984861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.984871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-26 17:03:58.984878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.984900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.984907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.984913 | orchestrator | 2025-03-26 17:03:58.984920 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-03-26 17:03:58.984926 | orchestrator | Wednesday 26 March 2025 17:01:57 +0000 (0:00:05.524) 0:07:23.543 ******* 2025-03-26 17:03:58.984933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.984943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-26 17:03:58.984949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.984956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.984977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.984984 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.984991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.984997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-26 17:03:58.985007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.985014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.985020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.985027 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.985048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.985056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-03-26 17:03:58.985062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.985072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-03-26 17:03:58.985079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-03-26 17:03:58.985085 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.985092 | orchestrator | 2025-03-26 17:03:58.985098 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] 
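Every container definition dumped in these octavia items carries a healthcheck block with the same shape: string-valued interval, retries, start_period and timeout plus a CMD-SHELL test (healthcheck_curl for HTTP endpoints, healthcheck_port for port-based checks). As a small readability aid only (this parser is an assumption, not part of kolla-ansible, and the timing fields are presumably seconds as in Docker healthchecks), the octavia-api healthcheck from the log can be read like this:

# Illustrative parser for the healthcheck dictionaries shown in the log above.
from dataclasses import dataclass

@dataclass
class HealthCheck:
    interval: int       # time between probes (presumably seconds)
    retries: int        # consecutive failures before the container is unhealthy
    start_period: int   # grace period after container start
    timeout: int        # per-probe timeout
    test: list          # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876']

def parse_healthcheck(raw):
    return HealthCheck(
        interval=int(raw["interval"]),
        retries=int(raw["retries"]),
        start_period=int(raw["start_period"]),
        timeout=int(raw["timeout"]),
        test=list(raw["test"]),
    )

# Values copied from the octavia-api item for testbed-node-0:
print(parse_healthcheck({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9876"],
    "timeout": "30",
}))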
*********************** 2025-03-26 17:03:58.985104 | orchestrator | Wednesday 26 March 2025 17:01:58 +0000 (0:00:01.034) 0:07:24.577 ******* 2025-03-26 17:03:58.985110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-26 17:03:58.985118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-26 17:03:58.985126 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.985133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-26 17:03:58.985140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-26 17:03:58.985147 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.985169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-26 17:03:58.985178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-03-26 17:03:58.985185 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.985192 | orchestrator | 2025-03-26 17:03:58.985199 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-03-26 17:03:58.985206 | orchestrator | Wednesday 26 March 2025 17:02:00 +0000 (0:00:01.469) 0:07:26.047 ******* 2025-03-26 17:03:58.985213 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.985219 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.985226 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.985233 | orchestrator | 2025-03-26 17:03:58.985240 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-03-26 17:03:58.985247 | orchestrator | Wednesday 26 March 2025 17:02:01 +0000 (0:00:01.643) 0:07:27.690 ******* 2025-03-26 17:03:58.985257 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.985265 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.985271 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.985278 | orchestrator | 2025-03-26 17:03:58.985285 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-03-26 17:03:58.985292 | orchestrator | Wednesday 26 March 2025 17:02:04 +0000 (0:00:02.690) 0:07:30.381 ******* 2025-03-26 17:03:58.985299 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.985306 | orchestrator | 2025-03-26 17:03:58.985313 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-03-26 17:03:58.985319 | orchestrator | Wednesday 26 March 2025 17:02:06 +0000 (0:00:01.936) 0:07:32.317 
******* 2025-03-26 17:03:58.985326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-26 17:03:58.985334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-26 17:03:58.985341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-03-26 17:03:58.985365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-26 17:03:58.985377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-26 17:03:58.985385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-03-26 17:03:58.985392 | orchestrator | 2025-03-26 17:03:58.985399 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-03-26 17:03:58.985406 | orchestrator | Wednesday 26 March 2025 17:02:13 +0000 (0:00:07.306) 0:07:39.623 ******* 2025-03-26 17:03:58.985428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-26 17:03:58.985436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-26 17:03:58.985446 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.985454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-26 17:03:58.985461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-26 17:03:58.985469 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.985476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-03-26 17:03:58.985497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-03-26 17:03:58.985508 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.985515 | orchestrator | 2025-03-26 17:03:58.985521 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-03-26 17:03:58.985527 | orchestrator | Wednesday 26 March 2025 17:02:14 +0000 (0:00:00.976) 0:07:40.600 ******* 2025-03-26 17:03:58.985534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-26 17:03:58.985540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-26 17:03:58.985546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-26 17:03:58.985553 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.985559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-26 17:03:58.985565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-26 17:03:58.985572 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-26 17:03:58.985578 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.985587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-03-26 17:03:58.985593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-26 17:03:58.985600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-03-26 17:03:58.985606 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.985612 | orchestrator | 2025-03-26 17:03:58.985618 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-03-26 17:03:58.985625 | orchestrator | Wednesday 26 March 2025 17:02:16 +0000 (0:00:01.796) 0:07:42.397 ******* 2025-03-26 17:03:58.985631 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.985637 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.985647 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.985653 | orchestrator | 2025-03-26 17:03:58.985659 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-03-26 17:03:58.985665 | orchestrator | Wednesday 26 March 2025 17:02:16 +0000 (0:00:00.499) 0:07:42.896 ******* 2025-03-26 17:03:58.985671 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.985678 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.985684 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.985690 | orchestrator | 2025-03-26 17:03:58.985697 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-03-26 17:03:58.985703 | orchestrator | Wednesday 26 March 2025 17:02:18 +0000 (0:00:01.851) 0:07:44.747 ******* 2025-03-26 17:03:58.985724 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.985731 | orchestrator | 2025-03-26 17:03:58.985738 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-03-26 17:03:58.985744 | orchestrator | Wednesday 26 March 2025 17:02:20 +0000 (0:00:02.083) 0:07:46.830 ******* 2025-03-26 17:03:58.985750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-26 17:03:58.985757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-26 17:03:58.985764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.985770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.985777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-26 17:03:58.985787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-26 17:03:58.985807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-26 17:03:58.985815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.985822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.985838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-26 17:03:58.985845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-03-26 17:03:58.985852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-26 17:03:58.985862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.985883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.985891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-26 17:03:58.985898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-26 17:03:58.985905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-26 17:03:58.985912 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.985924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.985945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-26 17:03:58.985953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.985960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-26 17:03:58.985967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-26 17:03:58.985977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.985984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.985990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-26 17:03:58.985999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-03-26 17:03:58.986013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-26 17:03:58.986043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-26 17:03:58.986075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986082 | orchestrator | 
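[editor's note] The "changed" vs "skipping" split in the haproxy-config loop above follows directly from the item data echoed by Ansible: only service entries that are enabled and that carry an 'haproxy' mapping get a frontend/backend rendered (prometheus-server, prometheus-alertmanager), while the exporters without one are skipped. The same gating pattern explains why the ProxySQL users/rules tasks earlier were "changed" for octavia (a database-backed service) but skipped for opensearch and prometheus. The sketch below is illustrative only and is not kolla-ansible's actual role code; the names services and plan_haproxy_config are hypothetical.

# Illustrative sketch -- not kolla-ansible's haproxy-config role.
# It mirrors the pattern visible in the log: an item produces a rendered
# HAProxy section only when it is enabled and defines an 'haproxy' mapping.

services = {
    "prometheus-server": {
        "enabled": True,
        "haproxy": {
            "prometheus_server": {"enabled": True, "mode": "http",
                                  "external": False, "port": "9091"},
        },
    },
    "prometheus-node-exporter": {
        "enabled": True,
        # no 'haproxy' key -> nothing to render, reported as skipped
    },
}

def plan_haproxy_config(services):
    """Return (changed, skipped) item names, mirroring the log output."""
    changed, skipped = [], []
    for name, svc in services.items():
        if svc.get("enabled") and svc.get("haproxy"):
            changed.append(name)
        else:
            skipped.append(name)
    return changed, skipped

changed, skipped = plan_haproxy_config(services)
print("changed:", changed)   # ['prometheus-server']
print("skipped:", skipped)   # ['prometheus-node-exporter']

[end editor's note]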
2025-03-26 17:03:58.986088 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-03-26 17:03:58.986095 | orchestrator | Wednesday 26 March 2025 17:02:26 +0000 (0:00:05.904) 0:07:52.734 ******* 2025-03-26 17:03:58.986101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-26 17:03:58.986108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-26 17:03:58.986114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-26 17:03:58.986145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-26 17:03:58.986152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-26 17:03:58.986158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-26 17:03:58.986182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 
'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986188 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.986202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-26 17:03:58.986209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-26 17:03:58.986216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-03-26 17:03:58.986232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-03-26 17:03:58.986239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-26 17:03:58.986266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-26 17:03:58.986287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-03-26 17:03:58.986294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-26 17:03:58.986300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-03-26 17:03:58.986309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-03-26 17:03:58.986322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-26 17:03:58.986350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986362 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.986372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-03-26 17:03:58.986378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-03-26 17:03:58.986385 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.986391 | orchestrator | 2025-03-26 17:03:58.986397 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-03-26 17:03:58.986411 | orchestrator | Wednesday 26 March 2025 17:02:28 +0000 (0:00:01.664) 0:07:54.399 ******* 2025-03-26 17:03:58.986418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-26 17:03:58.986424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-26 17:03:58.986431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-26 17:03:58.986437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-26 17:03:58.986443 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.986450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-26 17:03:58.986456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-26 17:03:58.986463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-26 17:03:58.986469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-26 17:03:58.986475 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.986481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-03-26 17:03:58.986490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-03-26 17:03:58.986497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-26 17:03:58.986505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-03-26 17:03:58.986512 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.986521 | orchestrator | 2025-03-26 17:03:58.986527 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-03-26 17:03:58.986534 | orchestrator | Wednesday 26 March 2025 17:02:30 +0000 (0:00:01.986) 0:07:56.386 ******* 2025-03-26 17:03:58.986540 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.986553 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.986559 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.986565 | orchestrator | 2025-03-26 17:03:58.986572 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-03-26 17:03:58.986578 | orchestrator | Wednesday 26 March 2025 17:02:31 +0000 (0:00:00.967) 0:07:57.353 ******* 2025-03-26 17:03:58.986584 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.986590 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.986596 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.986602 | orchestrator | 2025-03-26 17:03:58.986608 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-03-26 17:03:58.986614 | orchestrator | Wednesday 26 March 2025 17:02:33 +0000 (0:00:02.110) 0:07:59.463 ******* 2025-03-26 17:03:58.986621 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.986627 | orchestrator | 2025-03-26 17:03:58.986633 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-03-26 17:03:58.986639 | orchestrator | Wednesday 26 March 2025 17:02:35 +0000 (0:00:02.317) 0:08:01.781 ******* 2025-03-26 17:03:58.986650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-26 17:03:58.986657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-26 17:03:58.986666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-03-26 17:03:58.986681 | orchestrator | 2025-03-26 17:03:58.986688 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-03-26 17:03:58.986694 | orchestrator | Wednesday 26 March 2025 17:02:39 +0000 (0:00:03.637) 0:08:05.419 ******* 2025-03-26 17:03:58.986700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-26 17:03:58.986707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-26 17:03:58.986714 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.986720 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.986726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-03-26 17:03:58.986733 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.986739 | orchestrator | 2025-03-26 17:03:58.986745 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-03-26 17:03:58.986751 | orchestrator | Wednesday 26 March 2025 17:02:39 +0000 (0:00:00.464) 0:08:05.883 ******* 2025-03-26 17:03:58.986757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-26 17:03:58.986767 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.986774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}})  2025-03-26 17:03:58.986780 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.986788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-03-26 17:03:58.986795 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.986801 | orchestrator | 2025-03-26 17:03:58.986807 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-03-26 17:03:58.986813 | orchestrator | Wednesday 26 March 2025 17:02:41 +0000 (0:00:01.478) 0:08:07.362 ******* 2025-03-26 17:03:58.986819 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.986826 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.986866 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.986873 | orchestrator | 2025-03-26 17:03:58.986879 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-03-26 17:03:58.986885 | orchestrator | Wednesday 26 March 2025 17:02:41 +0000 (0:00:00.500) 0:08:07.863 ******* 2025-03-26 17:03:58.986891 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.986897 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.986904 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.986910 | orchestrator | 2025-03-26 17:03:58.986916 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-03-26 17:03:58.986922 | orchestrator | Wednesday 26 March 2025 17:02:43 +0000 (0:00:02.001) 0:08:09.864 ******* 2025-03-26 17:03:58.986928 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-03-26 17:03:58.986934 | orchestrator | 2025-03-26 17:03:58.986940 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-03-26 17:03:58.986947 | orchestrator | Wednesday 26 March 2025 17:02:46 +0000 (0:00:02.315) 0:08:12.179 ******* 2025-03-26 17:03:58.986953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.986965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.986976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.986986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.986993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.987000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-03-26 17:03:58.987006 | orchestrator | 2025-03-26 17:03:58.987012 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-03-26 17:03:58.987019 | orchestrator | Wednesday 26 March 2025 17:02:55 +0000 (0:00:09.193) 0:08:21.372 ******* 2025-03-26 17:03:58.987033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.987043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.987050 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.987056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.987067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.987074 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.987080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-03-26 17:03:58.987095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-03-26 
17:03:58.987102 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.987108 | orchestrator | 2025-03-26 17:03:58.987115 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-03-26 17:03:58.987121 | orchestrator | Wednesday 26 March 2025 17:02:56 +0000 (0:00:01.448) 0:08:22.821 ******* 2025-03-26 17:03:58.987127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-26 17:03:58.987133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-26 17:03:58.987140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-26 17:03:58.987146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-26 17:03:58.987152 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.987159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-26 17:03:58.987165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-26 17:03:58.987172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-26 17:03:58.987178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-26 17:03:58.987188 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.987194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-26 17:03:58.987200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-03-26 17:03:58.987207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-26 17:03:58.987213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}})  2025-03-26 17:03:58.987219 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.987225 | orchestrator | 2025-03-26 17:03:58.987231 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-03-26 17:03:58.987237 | orchestrator | Wednesday 26 March 2025 17:02:58 +0000 (0:00:01.808) 0:08:24.629 ******* 2025-03-26 17:03:58.987244 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.987250 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.987256 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.987262 | orchestrator | 2025-03-26 17:03:58.987268 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-03-26 17:03:58.987274 | orchestrator | Wednesday 26 March 2025 17:03:00 +0000 (0:00:01.671) 0:08:26.301 ******* 2025-03-26 17:03:58.987281 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.987287 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.987293 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.987299 | orchestrator | 2025-03-26 17:03:58.987305 | orchestrator | TASK [include_role : swift] **************************************************** 2025-03-26 17:03:58.987315 | orchestrator | Wednesday 26 March 2025 17:03:03 +0000 (0:00:03.164) 0:08:29.465 ******* 2025-03-26 17:03:58.987321 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.987327 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.987335 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.987341 | orchestrator | 2025-03-26 17:03:58.987347 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-03-26 17:03:58.987352 | orchestrator | Wednesday 26 March 2025 17:03:04 +0000 (0:00:00.645) 0:08:30.110 ******* 2025-03-26 17:03:58.987358 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.987364 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.987370 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.987376 | orchestrator | 2025-03-26 17:03:58.987382 | orchestrator | TASK [include_role : trove] **************************************************** 2025-03-26 17:03:58.987387 | orchestrator | Wednesday 26 March 2025 17:03:04 +0000 (0:00:00.362) 0:08:30.473 ******* 2025-03-26 17:03:58.987393 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.987399 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.987405 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.987411 | orchestrator | 2025-03-26 17:03:58.987417 | orchestrator | TASK [include_role : venus] **************************************************** 2025-03-26 17:03:58.987423 | orchestrator | Wednesday 26 March 2025 17:03:05 +0000 (0:00:00.662) 0:08:31.135 ******* 2025-03-26 17:03:58.987428 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.987434 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.987440 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.987450 | orchestrator | 2025-03-26 17:03:58.987456 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-03-26 17:03:58.987461 | orchestrator | Wednesday 26 March 2025 17:03:05 +0000 (0:00:00.758) 0:08:31.894 ******* 2025-03-26 17:03:58.987467 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.987473 | orchestrator | skipping: [testbed-node-1] 2025-03-26 
17:03:58.987479 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.987485 | orchestrator | 2025-03-26 17:03:58.987491 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-03-26 17:03:58.987496 | orchestrator | Wednesday 26 March 2025 17:03:06 +0000 (0:00:00.670) 0:08:32.565 ******* 2025-03-26 17:03:58.987502 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.987508 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.987514 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.987519 | orchestrator | 2025-03-26 17:03:58.987525 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-03-26 17:03:58.987531 | orchestrator | Wednesday 26 March 2025 17:03:07 +0000 (0:00:00.878) 0:08:33.443 ******* 2025-03-26 17:03:58.987537 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.987543 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.987549 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.987555 | orchestrator | 2025-03-26 17:03:58.987561 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-03-26 17:03:58.987566 | orchestrator | Wednesday 26 March 2025 17:03:08 +0000 (0:00:01.046) 0:08:34.490 ******* 2025-03-26 17:03:58.987572 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.987581 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.987588 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.987594 | orchestrator | 2025-03-26 17:03:58.987599 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-03-26 17:03:58.987605 | orchestrator | Wednesday 26 March 2025 17:03:08 +0000 (0:00:00.371) 0:08:34.861 ******* 2025-03-26 17:03:58.987611 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.987617 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.987623 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.987629 | orchestrator | 2025-03-26 17:03:58.987634 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-03-26 17:03:58.987640 | orchestrator | Wednesday 26 March 2025 17:03:10 +0000 (0:00:01.581) 0:08:36.443 ******* 2025-03-26 17:03:58.987646 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.987652 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.987658 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.987663 | orchestrator | 2025-03-26 17:03:58.987669 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-03-26 17:03:58.987675 | orchestrator | Wednesday 26 March 2025 17:03:11 +0000 (0:00:01.463) 0:08:37.907 ******* 2025-03-26 17:03:58.987681 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.987687 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.987692 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.987698 | orchestrator | 2025-03-26 17:03:58.987704 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-03-26 17:03:58.987710 | orchestrator | Wednesday 26 March 2025 17:03:13 +0000 (0:00:01.347) 0:08:39.254 ******* 2025-03-26 17:03:58.987716 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.987721 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.987727 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.987733 | orchestrator | 2025-03-26 17:03:58.987739 | orchestrator | 
RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-03-26 17:03:58.987745 | orchestrator | Wednesday 26 March 2025 17:03:24 +0000 (0:00:10.946) 0:08:50.200 ******* 2025-03-26 17:03:58.987750 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.987756 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.987762 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.987768 | orchestrator | 2025-03-26 17:03:58.987773 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-03-26 17:03:58.987782 | orchestrator | Wednesday 26 March 2025 17:03:25 +0000 (0:00:01.152) 0:08:51.352 ******* 2025-03-26 17:03:58.987788 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.987794 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.987800 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.987806 | orchestrator | 2025-03-26 17:03:58.987812 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-03-26 17:03:58.987818 | orchestrator | Wednesday 26 March 2025 17:03:37 +0000 (0:00:12.186) 0:09:03.539 ******* 2025-03-26 17:03:58.987823 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.987838 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.987844 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.987849 | orchestrator | 2025-03-26 17:03:58.987855 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-03-26 17:03:58.987861 | orchestrator | Wednesday 26 March 2025 17:03:39 +0000 (0:00:01.451) 0:09:04.990 ******* 2025-03-26 17:03:58.987867 | orchestrator | changed: [testbed-node-0] 2025-03-26 17:03:58.987873 | orchestrator | changed: [testbed-node-2] 2025-03-26 17:03:58.987878 | orchestrator | changed: [testbed-node-1] 2025-03-26 17:03:58.987884 | orchestrator | 2025-03-26 17:03:58.987892 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-03-26 17:03:58.987901 | orchestrator | Wednesday 26 March 2025 17:03:49 +0000 (0:00:10.599) 0:09:15.589 ******* 2025-03-26 17:03:58.987907 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.987913 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.987919 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.987925 | orchestrator | 2025-03-26 17:03:58.987930 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-03-26 17:03:58.987936 | orchestrator | Wednesday 26 March 2025 17:03:50 +0000 (0:00:00.813) 0:09:16.402 ******* 2025-03-26 17:03:58.987942 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.987948 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.987954 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.987959 | orchestrator | 2025-03-26 17:03:58.987965 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-03-26 17:03:58.987971 | orchestrator | Wednesday 26 March 2025 17:03:51 +0000 (0:00:00.760) 0:09:17.163 ******* 2025-03-26 17:03:58.987977 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.987983 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.987988 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.987994 | orchestrator | 2025-03-26 17:03:58.988000 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-03-26 
17:03:58.988006 | orchestrator | Wednesday 26 March 2025 17:03:51 +0000 (0:00:00.790) 0:09:17.954 ******* 2025-03-26 17:03:58.988011 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.988017 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.988023 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.988029 | orchestrator | 2025-03-26 17:03:58.988035 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-03-26 17:03:58.988040 | orchestrator | Wednesday 26 March 2025 17:03:52 +0000 (0:00:00.383) 0:09:18.337 ******* 2025-03-26 17:03:58.988046 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.988052 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.988058 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.988063 | orchestrator | 2025-03-26 17:03:58.988069 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-03-26 17:03:58.988075 | orchestrator | Wednesday 26 March 2025 17:03:53 +0000 (0:00:00.709) 0:09:19.047 ******* 2025-03-26 17:03:58.988081 | orchestrator | skipping: [testbed-node-0] 2025-03-26 17:03:58.988086 | orchestrator | skipping: [testbed-node-1] 2025-03-26 17:03:58.988092 | orchestrator | skipping: [testbed-node-2] 2025-03-26 17:03:58.988098 | orchestrator | 2025-03-26 17:03:58.988104 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-03-26 17:03:58.988110 | orchestrator | Wednesday 26 March 2025 17:03:53 +0000 (0:00:00.583) 0:09:19.630 ******* 2025-03-26 17:03:58.988118 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.988124 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.988130 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.988136 | orchestrator | 2025-03-26 17:03:58.988142 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-03-26 17:03:58.988148 | orchestrator | Wednesday 26 March 2025 17:03:55 +0000 (0:00:01.626) 0:09:21.257 ******* 2025-03-26 17:03:58.988153 | orchestrator | ok: [testbed-node-0] 2025-03-26 17:03:58.988159 | orchestrator | ok: [testbed-node-1] 2025-03-26 17:03:58.988165 | orchestrator | ok: [testbed-node-2] 2025-03-26 17:03:58.988173 | orchestrator | 2025-03-26 17:03:58.988179 | orchestrator | PLAY RECAP ********************************************************************* 2025-03-26 17:03:58.988185 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-03-26 17:03:58.988191 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-03-26 17:03:58.988197 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-03-26 17:03:58.988203 | orchestrator | 2025-03-26 17:03:58.988209 | orchestrator | 2025-03-26 17:03:58.988214 | orchestrator | TASKS RECAP ******************************************************************** 2025-03-26 17:03:58.988220 | orchestrator | Wednesday 26 March 2025 17:03:56 +0000 (0:00:01.514) 0:09:22.772 ******* 2025-03-26 17:03:58.988226 | orchestrator | =============================================================================== 2025-03-26 17:03:58.988232 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.19s 2025-03-26 17:03:58.988238 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 
10.95s 2025-03-26 17:03:58.988243 | orchestrator | loadbalancer : Start backup keepalived container ----------------------- 10.60s 2025-03-26 17:03:58.988249 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 9.58s 2025-03-26 17:03:58.988255 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 9.31s 2025-03-26 17:03:58.988261 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 9.19s 2025-03-26 17:03:58.988267 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 8.79s 2025-03-26 17:03:58.988272 | orchestrator | loadbalancer : Ensuring proxysql service config subdirectories exist ---- 8.09s 2025-03-26 17:03:58.988278 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 7.92s 2025-03-26 17:03:58.988284 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 7.72s 2025-03-26 17:03:58.988290 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.31s 2025-03-26 17:03:58.988296 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 7.16s 2025-03-26 17:03:58.988301 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 6.63s 2025-03-26 17:03:58.988310 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 6.59s 2025-03-26 17:03:58.988318 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.57s 2025-03-26 17:04:02.028756 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 6.47s 2025-03-26 17:04:02.028953 | orchestrator | loadbalancer : Removing checks for services which are disabled ---------- 6.43s 2025-03-26 17:04:02.028984 | orchestrator | loadbalancer : Remove mariadb.cfg if proxysql enabled ------------------- 6.17s 2025-03-26 17:04:02.029009 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 6.16s 2025-03-26 17:04:02.029032 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.90s 2025-03-26 17:04:02.029057 | orchestrator | 2025-03-26 17:03:58 | INFO  | Task 136f9e71-ea3c-475a-ac00-8f03ebbcf4ee is in state STARTED 2025-03-26 17:04:02.029121 | orchestrator | 2025-03-26 17:03:58 | INFO  | Wait 1 second(s) until the next check 2025-03-26 17:04:02.029169 | orchestrator | 2025-03-26 17:04:02 | INFO  | Task d65e1e7e-e096-408f-b62d-51dda8e1cc6e is in state STARTED 2025-03-26 17:04:02.029291 | orchestrator | 2025-03-26 17:04:02 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 17:04:02.029314 | orchestrator | 2025-03-26 17:04:02 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 17:04:02.030312 | orchestrator | 2025-03-26 17:04:02 | INFO  | Task 136f9e71-ea3c-475a-ac00-8f03ebbcf4ee is in state STARTED 2025-03-26 17:04:05.077695 | orchestrator | 2025-03-26 17:04:02 | INFO  | Wait 1 second(s) until the next check 2025-03-26 17:04:05.077878 | orchestrator | 2025-03-26 17:04:05 | INFO  | Task d65e1e7e-e096-408f-b62d-51dda8e1cc6e is in state STARTED 2025-03-26 17:04:05.087996 | orchestrator | 2025-03-26 17:04:05 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 17:04:05.096158 | orchestrator | 2025-03-26 17:04:05 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in 
state STARTED 2025-03-26 17:04:05.096191 | orchestrator | 2025-03-26 17:04:05 | INFO  | Task 136f9e71-ea3c-475a-ac00-8f03ebbcf4ee is in state STARTED 2025-03-26 17:04:08.141742 | orchestrator | 2025-03-26 17:04:05 | INFO  | Wait 1 second(s) until the next check 2025-03-26 17:04:08.141923 | orchestrator | 2025-03-26 17:04:08 | INFO  | Task d65e1e7e-e096-408f-b62d-51dda8e1cc6e is in state STARTED 2025-03-26 17:04:08.144057 | orchestrator | 2025-03-26 17:04:08 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 17:04:08.144088 | orchestrator | 2025-03-26 17:04:08 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 17:04:08.145602 | orchestrator | 2025-03-26 17:04:08 | INFO  | Task 136f9e71-ea3c-475a-ac00-8f03ebbcf4ee is in state STARTED 2025-03-26 17:04:08.145947 | orchestrator | 2025-03-26 17:04:08 | INFO  | Wait 1 second(s) until the next check 2025-03-26 17:04:11.195451 | orchestrator | 2025-03-26 17:04:11 | INFO  | Task d65e1e7e-e096-408f-b62d-51dda8e1cc6e is in state STARTED 2025-03-26 17:04:11.195574 | orchestrator | 2025-03-26 17:04:11 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 17:04:11.196737 | orchestrator | 2025-03-26 17:04:11 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 17:04:11.198458 | orchestrator | 2025-03-26 17:04:11 | INFO  | Task 136f9e71-ea3c-475a-ac00-8f03ebbcf4ee is in state STARTED 2025-03-26 17:04:11.198562 | orchestrator | 2025-03-26 17:04:11 | INFO  | Wait 1 second(s) until the next check 2025-03-26 17:04:14.243030 | orchestrator | 2025-03-26 17:04:14 | INFO  | Task d65e1e7e-e096-408f-b62d-51dda8e1cc6e is in state STARTED 2025-03-26 17:04:14.244116 | orchestrator | 2025-03-26 17:04:14 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 17:04:14.248269 | orchestrator | 2025-03-26 17:04:14 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 17:04:14.249454 | orchestrator | 2025-03-26 17:04:14 | INFO  | Task 136f9e71-ea3c-475a-ac00-8f03ebbcf4ee is in state STARTED 2025-03-26 17:04:14.249571 | orchestrator | 2025-03-26 17:04:14 | INFO  | Wait 1 second(s) until the next check 2025-03-26 17:04:17.317739 | orchestrator | 2025-03-26 17:04:17 | INFO  | Task d65e1e7e-e096-408f-b62d-51dda8e1cc6e is in state STARTED 2025-03-26 17:04:17.319470 | orchestrator | 2025-03-26 17:04:17 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 17:04:17.319585 | orchestrator | 2025-03-26 17:04:17 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 17:04:17.320544 | orchestrator | 2025-03-26 17:04:17 | INFO  | Task 136f9e71-ea3c-475a-ac00-8f03ebbcf4ee is in state STARTED 2025-03-26 17:04:20.369344 | orchestrator | 2025-03-26 17:04:17 | INFO  | Wait 1 second(s) until the next check 2025-03-26 17:04:20.369492 | orchestrator | 2025-03-26 17:04:20 | INFO  | Task d65e1e7e-e096-408f-b62d-51dda8e1cc6e is in state STARTED 2025-03-26 17:04:20.369934 | orchestrator | 2025-03-26 17:04:20 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED 2025-03-26 17:04:20.371474 | orchestrator | 2025-03-26 17:04:20 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED 2025-03-26 17:04:20.373710 | orchestrator | 2025-03-26 17:04:20 | INFO  | Task 136f9e71-ea3c-475a-ac00-8f03ebbcf4ee is in state STARTED 2025-03-26 17:04:23.436403 | orchestrator | 2025-03-26 17:04:20 | INFO  | Wait 1 second(s) until the next check 2025-03-26 
17:04:23.436552 | orchestrator | 2025-03-26 17:04:23 | INFO  | Task d65e1e7e-e096-408f-b62d-51dda8e1cc6e is in state STARTED
2025-03-26 17:04:23.439171 | orchestrator | 2025-03-26 17:04:23 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED
2025-03-26 17:04:23.440509 | orchestrator | 2025-03-26 17:04:23 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED
2025-03-26 17:04:23.446374 | orchestrator | 2025-03-26 17:04:23 | INFO  | Task 136f9e71-ea3c-475a-ac00-8f03ebbcf4ee is in state STARTED
2025-03-26 17:04:26.519880 | orchestrator | 2025-03-26 17:04:23 | INFO  | Wait 1 second(s) until the next check
[... the same four task status checks repeat roughly every three seconds, all four tasks remaining in state STARTED, through 17:05:24 ...]
2025-03-26 17:05:27.674799 | orchestrator | 2025-03-26 17:05:27 | INFO  | Task d65e1e7e-e096-408f-b62d-51dda8e1cc6e is in state STARTED
2025-03-26 17:05:27.677077 | orchestrator | 2025-03-26 17:05:27 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED
2025-03-26 17:05:27.680475 | orchestrator | 2025-03-26 17:05:27 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED
2025-03-26 17:05:27.682228 | orchestrator | 2025-03-26 17:05:27 | INFO  | Task 136f9e71-ea3c-475a-ac00-8f03ebbcf4ee is in state STARTED
2025-03-26 17:05:30.717822 | orchestrator | 2025-03-26 17:05:27 | INFO  | Wait 1 second(s) until the next check
2025-03-26 17:05:30.717962 | orchestrator | 2025-03-26 17:05:30 | INFO  | Task d65e1e7e-e096-408f-b62d-51dda8e1cc6e is in state STARTED
2025-03-26 17:05:30.719670 | orchestrator | 2025-03-26 17:05:30 | INFO  | Task 41baa84b-a087-4b66-8b6f-7f993067aa7e is in state STARTED
2025-03-26 17:05:30.720510 | orchestrator | 2025-03-26 17:05:30 | INFO  | Task 28a05312-3755-4e41-b843-1b5ac5e1b438 is in state STARTED
2025-03-26 17:05:30.721355 | orchestrator | 2025-03-26 17:05:30 | INFO  | Task 136f9e71-ea3c-475a-ac00-8f03ebbcf4ee is in state STARTED
2025-03-26 17:05:31.109443 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
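The deploy run above ends in RESULT_TIMED_OUT while the orchestrator is still polling the same four OSISM tasks: every check reports them in state STARTED, so the loop never exits on its own and Zuul's job timeout eventually aborts the playbook. The shape of such a status-polling loop is roughly the following; this is a minimal illustrative sketch, not the actual OSISM client code, and the wait_for_tasks helper, the get_state callable and the terminal-state names are assumptions.

```python
# Minimal sketch of a task status polling loop with an overall deadline.
# Illustrative only: not the actual OSISM/Zuul implementation; the API
# wrapper (get_state) and the terminal state names are assumptions.
import logging
import time

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)
log = logging.getLogger("task-watcher")

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}  # Celery-style states (assumed)


def wait_for_tasks(get_state, task_ids, check_interval=1.0, timeout=None):
    """Poll every task until all reach a terminal state or the timeout expires."""
    deadline = None if timeout is None else time.monotonic() + timeout
    pending = list(task_ids)
    while pending:
        still_pending = []
        for task_id in pending:
            state = get_state(task_id)  # e.g. a thin wrapper around the task API
            log.info("Task %s is in state %s", task_id, state)
            if state not in TERMINAL_STATES:
                still_pending.append(task_id)
        pending = still_pending
        if not pending:
            break
        if deadline is not None and time.monotonic() >= deadline:
            raise TimeoutError(f"{len(pending)} task(s) did not finish: {pending}")
        log.info("Wait %d second(s) until the next check", check_interval)
        time.sleep(check_interval)
```

In the build above no client-side deadline fires; the four tasks simply stay in STARTED until the Zuul job timeout cancels the run, which is why the next thing in the log is the post playbook.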
2025-03-26 17:05:31.116793 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-03-26 17:05:31.822424 |
2025-03-26 17:05:31.822591 | PLAY [Post output play]
2025-03-26 17:05:31.851841 |
2025-03-26 17:05:31.851968 | LOOP [stage-output : Register sources]
2025-03-26 17:05:31.936992 |
2025-03-26 17:05:31.937257 | TASK [stage-output : Check sudo]
2025-03-26 17:05:32.662960 | orchestrator | sudo: a password is required
2025-03-26 17:05:32.979134 | orchestrator | ok: Runtime: 0:00:00.015060
2025-03-26 17:05:32.995926 |
2025-03-26 17:05:32.996059 | LOOP [stage-output : Set source and destination for files and folders]
2025-03-26 17:05:33.039010 |
2025-03-26 17:05:33.039245 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-03-26 17:05:33.131452 | orchestrator | ok
2025-03-26 17:05:33.142120 |
2025-03-26 17:05:33.142216 | LOOP [stage-output : Ensure target folders exist]
2025-03-26 17:05:33.574368 | orchestrator | ok: "docs"
2025-03-26 17:05:33.574889 |
2025-03-26 17:05:33.817856 | orchestrator | ok: "artifacts"
2025-03-26 17:05:34.094931 | orchestrator | ok: "logs"
2025-03-26 17:05:34.113879 |
2025-03-26 17:05:34.113991 | LOOP [stage-output : Copy files and folders to staging folder]
2025-03-26 17:05:34.157507 |
2025-03-26 17:05:34.157770 | TASK [stage-output : Make all log files readable]
2025-03-26 17:05:34.461288 | orchestrator | ok
2025-03-26 17:05:34.476034 |
2025-03-26 17:05:34.476129 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-03-26 17:05:34.544381 | orchestrator | skipping: Conditional result was False
2025-03-26 17:05:34.561642 |
2025-03-26 17:05:34.561775 | TASK [stage-output : Discover log files for compression]
2025-03-26 17:05:34.597585 | orchestrator | skipping: Conditional result was False
2025-03-26 17:05:34.615624 |
2025-03-26 17:05:34.615754 | LOOP [stage-output : Archive everything from logs]
2025-03-26 17:05:34.682273 |
2025-03-26 17:05:34.682398 | PLAY [Post cleanup play]
2025-03-26 17:05:34.705267 |
2025-03-26 17:05:34.705354 | TASK [Set cloud fact (Zuul deployment)]
2025-03-26 17:05:34.768190 | orchestrator | ok
2025-03-26 17:05:34.777033 |
2025-03-26 17:05:34.777118 | TASK [Set cloud fact (local deployment)]
2025-03-26 17:05:34.821226 | orchestrator | skipping: Conditional result was False
2025-03-26 17:05:34.844188 |
2025-03-26 17:05:34.844721 | TASK [Clean the cloud environment]
2025-03-26 17:05:35.484349 | orchestrator | 2025-03-26 17:05:35 - clean up servers
2025-03-26 17:05:36.293430 | orchestrator | 2025-03-26 17:05:36 - testbed-manager
2025-03-26 17:05:36.380937 | orchestrator | 2025-03-26 17:05:36 - testbed-node-4
2025-03-26 17:05:36.471543 | orchestrator | 2025-03-26 17:05:36 - testbed-node-2
2025-03-26 17:05:36.557738 | orchestrator | 2025-03-26 17:05:36 - testbed-node-5
2025-03-26 17:05:36.647768 | orchestrator | 2025-03-26 17:05:36 - testbed-node-0
2025-03-26 17:05:36.738958 | orchestrator | 2025-03-26 17:05:36 - testbed-node-3
2025-03-26 17:05:36.836022 | orchestrator | 2025-03-26 17:05:36 - testbed-node-1
2025-03-26 17:05:36.937683 | orchestrator | 2025-03-26 17:05:36 - clean up keypairs
2025-03-26 17:05:36.955155 | orchestrator | 2025-03-26 17:05:36 - testbed
2025-03-26 17:05:36.981611 | orchestrator | 2025-03-26 17:05:36 - wait for servers to be gone
2025-03-26 17:05:50.369480 | orchestrator | 2025-03-26 17:05:50 - clean up ports
2025-03-26 17:05:50.575162 | orchestrator | 2025-03-26 17:05:50 - 0a2bd4b0-bd54-4e5b-bb6d-72aaffe5e48a
2025-03-26 17:05:50.802552 | orchestrator | 2025-03-26 17:05:50 - 115ee3f0-130a-4610-8154-8fb97ce4bd18
2025-03-26 17:05:51.225909 | orchestrator | 2025-03-26 17:05:51 - 41860a70-7319-4999-9c9c-a14784171338
2025-03-26 17:05:51.493015 | orchestrator | 2025-03-26 17:05:51 - 68457f67-916d-465e-94fe-5372d4d0bc01
2025-03-26 17:05:52.491213 | orchestrator | 2025-03-26 17:05:52 - 6a742607-dcf7-40c2-9ce8-a71e442caaa3
2025-03-26 17:05:52.676697 | orchestrator | 2025-03-26 17:05:52 - 6e8e3e15-3f36-48e2-bdad-170766322ba3
2025-03-26 17:05:52.906223 | orchestrator | 2025-03-26 17:05:52 - fe0f5131-f297-40ca-a2d7-3f0b5c189118
2025-03-26 17:05:53.108024 | orchestrator | 2025-03-26 17:05:53 - clean up volumes
2025-03-26 17:05:53.238614 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-3-node-base
2025-03-26 17:05:53.283643 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-manager-base
2025-03-26 17:05:53.322776 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-0-node-base
2025-03-26 17:05:53.360967 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-5-node-base
2025-03-26 17:05:53.403216 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-4-node-base
2025-03-26 17:05:53.445534 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-2-node-base
2025-03-26 17:05:53.486637 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-8-node-2
2025-03-26 17:05:53.526089 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-1-node-base
2025-03-26 17:05:53.566060 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-13-node-1
2025-03-26 17:05:53.606730 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-12-node-0
2025-03-26 17:05:53.646222 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-0-node-0
2025-03-26 17:05:53.687474 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-17-node-5
2025-03-26 17:05:53.729648 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-2-node-2
2025-03-26 17:05:53.771148 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-11-node-5
2025-03-26 17:05:53.813204 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-6-node-0
2025-03-26 17:05:53.855776 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-15-node-3
2025-03-26 17:05:53.903083 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-14-node-2
2025-03-26 17:05:53.942794 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-9-node-3
2025-03-26 17:05:53.985938 | orchestrator | 2025-03-26 17:05:53 - testbed-volume-1-node-1
2025-03-26 17:05:54.026080 | orchestrator | 2025-03-26 17:05:54 - testbed-volume-7-node-1
2025-03-26 17:05:54.064734 | orchestrator | 2025-03-26 17:05:54 - testbed-volume-3-node-3
2025-03-26 17:05:54.105376 | orchestrator | 2025-03-26 17:05:54 - testbed-volume-10-node-4
2025-03-26 17:05:54.146913 | orchestrator | 2025-03-26 17:05:54 - testbed-volume-16-node-4
2025-03-26 17:05:54.185665 | orchestrator | 2025-03-26 17:05:54 - testbed-volume-4-node-4
2025-03-26 17:05:54.224657 | orchestrator | 2025-03-26 17:05:54 - testbed-volume-5-node-5
2025-03-26 17:05:54.273587 | orchestrator | 2025-03-26 17:05:54 - disconnect routers
2025-03-26 17:05:54.370067 | orchestrator | 2025-03-26 17:05:54 - testbed
2025-03-26 17:05:55.082220 | orchestrator | 2025-03-26 17:05:55 - clean up subnets
2025-03-26 17:05:55.114471 | orchestrator | 2025-03-26 17:05:55 - subnet-testbed-management
2025-03-26 17:05:55.257298 | orchestrator | 2025-03-26 17:05:55 - clean up networks
2025-03-26 17:05:55.455700 | orchestrator | 2025-03-26 17:05:55 - net-testbed-management
2025-03-26 17:05:55.708836 | orchestrator | 2025-03-26 17:05:55 - clean up security groups
2025-03-26 17:05:55.747309 | orchestrator | 2025-03-26 17:05:55 - testbed-management
2025-03-26 17:05:55.830152 | orchestrator | 2025-03-26 17:05:55 - testbed-node
2025-03-26 17:05:55.910724 | orchestrator | 2025-03-26 17:05:55 - clean up floating ips
2025-03-26 17:05:55.941386 | orchestrator | 2025-03-26 17:05:55 - 81.163.192.171
2025-03-26 17:05:56.307215 | orchestrator | 2025-03-26 17:05:56 - clean up routers
2025-03-26 17:05:56.352366 | orchestrator | 2025-03-26 17:05:56 - testbed
2025-03-26 17:05:57.470038 | orchestrator | changed
2025-03-26 17:05:57.515053 |
2025-03-26 17:05:57.515150 | PLAY RECAP
2025-03-26 17:05:57.515230 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-03-26 17:05:57.515277 |
2025-03-26 17:05:57.628824 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
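The "Clean the cloud environment" task above tears the testbed project down in dependency order: servers and keypairs first, then (after waiting for the servers to be gone) ports and volumes, then the router interface, subnet, network, security groups, floating IPs and finally the router itself. A condensed sketch of that order with openstacksdk could look like the following; it is illustrative only (the actual cleanup script ships with the testbed tooling), and the cloud name, the name prefix and the floating-IP filter are assumptions.

```python
# Illustrative teardown sketch with openstacksdk, mirroring the order seen
# in the log above. Not the script used by the job; cloud name, prefix and
# the DOWN-status floating IP filter are assumptions.
import openstack

conn = openstack.connect(cloud="testbed")   # assumed clouds.yaml entry
prefix = "testbed"                          # assumed naming convention

# 1. Servers and keypairs first, then wait for the servers to be gone.
for server in conn.compute.servers():
    if server.name.startswith(prefix):
        conn.compute.delete_server(server)
for keypair in conn.compute.keypairs():
    if keypair.name.startswith(prefix):
        conn.compute.delete_keypair(keypair)
for server in conn.compute.servers():
    if server.name.startswith(prefix):
        conn.compute.wait_for_delete(server)

# 2. Leftover ports on the management network, then the volumes.
network = conn.network.find_network(f"net-{prefix}-management")
if network:
    for port in conn.network.ports(network_id=network.id):
        conn.network.delete_port(port)
for volume in conn.block_storage.volumes():
    if volume.name.startswith(f"{prefix}-volume"):
        conn.block_storage.delete_volume(volume)

# 3. Disconnect the router, then remove subnet, network, security groups,
#    detached floating IPs and finally the router itself.
router = conn.network.find_router(prefix)
subnet = conn.network.find_subnet(f"subnet-{prefix}-management")
if router and subnet:
    conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
if subnet:
    conn.network.delete_subnet(subnet)
if network:
    conn.network.delete_network(network)
for group in conn.network.security_groups():
    if group.name.startswith(prefix):
        conn.network.delete_security_group(group)
for ip in conn.network.ips():
    if ip.status == "DOWN":  # assumption: testbed FIPs are detached by now
        conn.network.delete_ip(ip)
if router:
    conn.network.delete_router(router)
```

Deleting in this order avoids "resource in use" conflicts: ports and volumes can only go once their servers are gone, and the subnet, network and router can only go once nothing is attached to them anymore.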
2025-03-26 17:05:57.632617 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-03-26 17:05:58.309627 |
2025-03-26 17:05:58.309746 | PLAY [Base post-fetch]
2025-03-26 17:05:58.344069 |
2025-03-26 17:05:58.344176 | TASK [fetch-output : Set log path for multiple nodes]
2025-03-26 17:05:58.419478 | orchestrator | skipping: Conditional result was False
2025-03-26 17:05:58.427976 |
2025-03-26 17:05:58.428083 | TASK [fetch-output : Set log path for single node]
2025-03-26 17:05:58.471539 | orchestrator | ok
2025-03-26 17:05:58.477542 |
2025-03-26 17:05:58.477630 | LOOP [fetch-output : Ensure local output dirs]
2025-03-26 17:05:59.006640 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/dc1fbcec8af84866bbbe7ca0e0139f55/work/logs"
2025-03-26 17:05:59.339856 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dc1fbcec8af84866bbbe7ca0e0139f55/work/artifacts"
2025-03-26 17:05:59.963819 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dc1fbcec8af84866bbbe7ca0e0139f55/work/docs"
2025-03-26 17:05:59.979150 |
2025-03-26 17:05:59.979242 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-03-26 17:06:01.044168 | orchestrator | changed: .d..t...... ./
2025-03-26 17:06:01.044699 | orchestrator | changed: All items complete
2025-03-26 17:06:01.044930 |
2025-03-26 17:06:01.761153 | orchestrator | changed: .d..t...... ./
2025-03-26 17:06:02.303943 | orchestrator | changed: .d..t...... ./
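The base post-fetch playbook pulls the staged output back to the executor: fetch-output creates the local logs, artifacts and docs directories under the build's work directory and then copies the contents across with rsync, which is where the itemize-changes lines such as ".d..t...... ./" come from (a directory whose timestamp differs). A rough equivalent of that collection step is sketched below; the host alias and the remote staging directory are assumptions, while the local work path is the one shown in the log. In the job this copy is driven by the fetch-output Ansible role rather than a hand-written script; the sketch only mirrors the effect.

```python
# Rough sketch of the fetch-output collection step: rsync the staged
# "logs", "artifacts" and "docs" directories from the node back to the
# executor's work directory. Host alias and remote staging directory are
# assumptions; the local path is taken from the log above.
import subprocess

NODE = "orchestrator"         # assumed SSH host alias for the test node
REMOTE_STAGE = "zuul-output"  # assumed staging dir in the remote user's home
LOCAL_WORK = "/var/lib/zuul/builds/dc1fbcec8af84866bbbe7ca0e0139f55/work"

for subdir in ("logs", "artifacts", "docs"):
    subprocess.run(
        [
            "rsync",
            "--archive",
            "--itemize-changes",  # produces the ".d..t...... ./" lines seen above
            f"{NODE}:{REMOTE_STAGE}/{subdir}/",
            f"{LOCAL_WORK}/{subdir}/",
        ],
        check=True,
    )
```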
2025-03-26 17:06:02.331962 |
2025-03-26 17:06:02.332140 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-03-26 17:06:02.385906 | orchestrator | skipping: Conditional result was False
2025-03-26 17:06:02.399356 | orchestrator | skipping: Conditional result was False
2025-03-26 17:06:02.460005 |
2025-03-26 17:06:02.460101 | PLAY RECAP
2025-03-26 17:06:02.460151 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-03-26 17:06:02.460177 |
2025-03-26 17:06:02.572707 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-03-26 17:06:02.576211 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-03-26 17:06:03.446263 |
2025-03-26 17:06:03.446500 | PLAY [Base post]
2025-03-26 17:06:03.496213 |
2025-03-26 17:06:03.496408 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-03-26 17:06:04.398717 | orchestrator | changed
2025-03-26 17:06:04.466914 |
2025-03-26 17:06:04.467137 | PLAY RECAP
2025-03-26 17:06:04.467253 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-03-26 17:06:04.467403 |
2025-03-26 17:06:04.634452 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-03-26 17:06:04.640012 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-03-26 17:06:05.512957 |
2025-03-26 17:06:05.513098 | PLAY [Base post-logs]
2025-03-26 17:06:05.527442 |
2025-03-26 17:06:05.527542 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-03-26 17:06:06.052567 | localhost | changed
2025-03-26 17:06:06.058525 |
2025-03-26 17:06:06.058711 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-03-26 17:06:06.102605 | localhost | ok
2025-03-26 17:06:06.111274 |
2025-03-26 17:06:06.111383 | TASK [Set zuul-log-path fact]
2025-03-26 17:06:06.131846 | localhost | ok
2025-03-26 17:06:06.147623 |
2025-03-26 17:06:06.147738 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-03-26 17:06:06.170194 | localhost | ok
2025-03-26 17:06:06.177073 |
2025-03-26 17:06:06.177174 | TASK [upload-logs : Create log directories]
2025-03-26 17:06:07.274248 | localhost | changed
2025-03-26 17:06:07.278742 |
2025-03-26 17:06:07.278819 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-03-26 17:06:08.001321 | localhost -> localhost | ok: Runtime: 0:00:00.007493
2025-03-26 17:06:08.005378 |
2025-03-26 17:06:08.005468 | TASK [upload-logs : Upload logs to log server]
2025-03-26 17:06:08.807843 | localhost | Output suppressed because no_log was given
2025-03-26 17:06:08.810806 |
2025-03-26 17:06:08.810897 | LOOP [upload-logs : Compress console log and json output]
2025-03-26 17:06:08.926349 | localhost | skipping: Conditional result was False
2025-03-26 17:06:08.941166 | localhost | skipping: Conditional result was False
2025-03-26 17:06:08.955314 |
2025-03-26 17:06:08.955470 | LOOP [upload-logs : Upload compressed console log and json output]
2025-03-26 17:06:09.028097 | localhost | skipping: Conditional result was False
2025-03-26 17:06:09.028487 |
2025-03-26 17:06:09.045926 | localhost | skipping: Conditional result was False
2025-03-26 17:06:09.056934 |
2025-03-26 17:06:09.057062 | LOOP [upload-logs : Upload console log and json output]
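In the final post-logs phase, generate-zuul-manifest writes a JSON index of the collected files (so the Zuul dashboard can browse them) and upload-logs pushes the tree to the log server, with the upload output suppressed by no_log. A much-simplified idea of the manifest step is sketched below; the real role defines its own schema, mimetype detection and output location, so the structure and file name here are only approximations, and the log root is the build's work path from the log above.

```python
# Simplified illustration of the "Generate Zuul manifest" idea: walk the
# collected log tree and emit a JSON index a log browser can render.
# The actual generate-zuul-manifest role has its own schema and mimetype
# detection; this is an approximation, not the real implementation.
import json
import os

LOG_ROOT = "/var/lib/zuul/builds/dc1fbcec8af84866bbbe7ca0e0139f55/work/logs"


def walk(path):
    entries = []
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        if os.path.isdir(full):
            entries.append({"name": name,
                            "mimetype": "application/directory",
                            "children": walk(full)})
        else:
            entries.append({"name": name,
                            "mimetype": "text/plain",  # simplification
                            "size": os.path.getsize(full)})
    return entries


if __name__ == "__main__":
    manifest = {"tree": walk(LOG_ROOT)}
    with open(os.path.join(LOG_ROOT, "zuul-manifest.json"), "w") as handle:
        json.dump(manifest, handle, indent=2)
```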